• Hagels_Bagels@lemmygrad.ml
    1 year ago

    Great. Now people are going to read up a bunch of bs generated by a language model and confidently spread around “hallucinations” as facts.

      • salient_one@lemmy.villa-straylight.social
        1 year ago

        Probably, though that may be an optimistic assumption. Either way, I believe it will still result in more mistakes, simply because it’s harder to spot errors in an existing text than to avoid introducing them in the first place by fact-checking beforehand and then having another person proofread.

        One reason is that LLMs feel no guilt when they hallucinate, while most humans dislike lying or being too lazy to fact-check; even those who don’t care still have to worry about getting caught and damaging their reputation, a concern LLMs also lack. And stating something false as fact in an article can’t be called an honest mistake (it’s negligence at best), unlike an editor missing something (due to a looming deadline, perhaps), especially when it’s merely assumed there won’t be too many hallucinations, which is far from certain.