I think AI is neat.

  • usualsuspect191@lemmy.ca · 11 months ago

    Even if LLMs can’t be said to have ‘true understanding’ (however you’re choosing to define it), there is very little to suggest they should be able to understand/predict the correct response to a particular context, abstract meaning, and intent with the primitive tools they were built with.

    Did you mean “shouldn’t”? Otherwise I’m very confused by your response.

    • archomrade [he/him]@midwest.social · 11 months ago

      No, I mean ‘should’, as in:

      There’s no reason to expect that a program which calculates the probability of the next most likely word in a sentence should be able to do anything more than string together an incoherent sentence, let alone correctly answer an arbitrary question.

      It’s like using a description of how covalent bonds form as an explanation for how you know when you need to take a shit.
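
      For anyone unfamiliar, here’s a minimal sketch of what “calculates the probability of the next most likely word” means, using a toy bigram counter over a made-up corpus. The corpus and function names are invented for illustration; a real LLM uses a transformer trained on vastly more data, but the training objective is the same next-word prediction.

      ```python
      from collections import Counter, defaultdict

      # Hypothetical toy corpus, invented for illustration only.
      corpus = "the cat sat on the mat the cat ate the fish".split()

      # Count how often each word follows each preceding word (bigrams).
      followers = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          followers[prev][nxt] += 1

      def next_word(prev: str) -> str:
          """Return the single most probable word to follow `prev`."""
          counts = followers[prev]
          if not counts:
              return "<unk>"  # context never seen in the corpus
          return counts.most_common(1)[0][0]

      print(next_word("the"))  # -> "cat" (follows "the" in 2 of 4 cases)
      ```

      That’s the whole mechanism at this level of description: pick the continuation with the highest observed probability. The argument above is that nothing in that description predicts coherent answers to arbitrary questions.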