I have a lot of conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey, precisely because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find that the emotional barrier is especially difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

  • BlameThePeacock@lemmy.ca · 3 months ago

    It’s just fancy predictive text, like the word suggestions while texting on your phone. It guesses what the next word should be, just for far more complex topics.
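
    A toy sketch of the point (the tiny corpus and the predict_next helper are made up for illustration; a real LLM uses a neural network rather than word counts, but the core task of “guess the next word” is the same):

    ```python
    # Toy next-word predictor: count which word follows which in some
    # training text, then always suggest the most frequent follower.
    from collections import Counter, defaultdict

    training_text = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog ."
    )

    # Count how often each word follows each other word (bigram counts).
    followers = defaultdict(Counter)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the training text."""
        if word not in followers:
            return None
        return followers[word].most_common(1)[0][0]

    print(predict_next("the"))  # -> "cat" (most frequent, not "most true")
    print(predict_next("sat"))  # -> "on"
    ```

    The predictor never “knows” that a cat sat anywhere; it only knows which word tends to come next. An LLM replaces the counting with a huge neural network trained on vast amounts of text, which is why its guesses read like understanding.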

    • k110111@feddit.de · 3 months ago

      It’s like saying an OS is just a bunch of if-then-else statements. While that’s technically true, in practice it’s far, far more complicated.