• Rhaedas@fedia.io · 9 months ago

    LLMs are just very complex and intricate mirrors of ourselves because they use our past ramblings to pull from for the best responses to a prompt. They only feel like they are intelligent because we can’t see the inner workings the way we could see ELIZA’s IF/THEN statements, and yet many people were still convinced ELIZA was really talking to them. Humans are wired to anthropomorphize, often to a fault.
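
    (For anyone who never looked at ELIZA’s guts: it was essentially a pile of pattern → canned-response rules, something like the toy sketch below. The rules there are made up for illustration, not Weizenbaum’s actual DOCTOR script.)

```python
import random
import re

# Toy ELIZA-style responder: match the input against simple patterns and
# reflect fragments of it back. Rules are invented for illustration only.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r".*\bmother\b.*", ["Tell me more about your mother."]),
    (r".*\?", ["Why do you ask that?", "What do you think?"]),
]

def respond(user_input: str) -> str:
    for pattern, responses in RULES:
        match = re.match(pattern, user_input.strip(), re.IGNORECASE)
        if match:
            # Fill any captured fragment back into a canned reply.
            return random.choice(responses).format(*match.groups())
    return "Please go on."  # default when nothing matches

print(respond("I need a holiday"))  # e.g. "Why do you need a holiday?"
```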

    I say that while also believing we may yet develop actual AGI of some sort, which will probably use LLMs as a database to pull from. And what is concerning is that even though LLMs are not “thinking” themselves, the way we’ve dived head first into them, ignoring the dangers of misuse and the many flaws they have, is telling about how we’ll ignore problems in AI development, such as the misalignment problem, which has basically been shelved by AI companies in favor of profits and being first.

    HAL from 2001/2010 was a great lesson - it’s not the AI…the humans were the monsters all along.

    • Hazzard@lemm.ee · 9 months ago

      I don’t necessarily disagree that we may figure out AGI, and even that LLM research may help us get there, but frankly, I don’t think an LLM will actually be any part of an AGI system.

      Because fundamentally it doesn’t understand the words it’s writing. The more I play with it and learn about it, the more it feels like a glorified autocomplete/autocorrect. I suspect hallucinations, “Waluigis”, and “jailbreaks” are fundamental problems for a language model that is just trying to complete a story, in a way they wouldn’t be for an actual intelligence with a purpose.
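
      To make the “glorified autocomplete” point concrete, here’s a toy sketch: a word-level bigram model on a made-up corpus. It’s nothing like a real transformer, but the interface is the same, context in, statistically likely continuation out.

```python
from collections import Counter, defaultdict

# Count which word most often follows each word in a tiny made-up corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

next_word = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    next_word[prev][cur] += 1

def complete(prompt: str, length: int = 5) -> str:
    words = prompt.split()
    for _ in range(length):
        candidates = next_word.get(words[-1])
        if not candidates:
            break
        # Greedy "autocomplete": always emit the most frequent continuation.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("the cat"))  # continues with whatever followed "the cat" most often
```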

    • GregorGizeh@lemmy.zip · 9 months ago

      It isn’t so much “we” as in humanity; it is a select few very ambitious and very reckless corpos who are pushing for this, to the detriment of the rest (surprise).

      If “we” were able to rein in our capitalists we could develop the technology much more ethically and in compliance with the public good. But no, we leave the field to corpos with delusions of grandeur (does anyone remember the short spat within the OpenAI leadership? Altman got thrown out for recklessness, investors and some employees complained, he came back, and the whole more considerate and careful wing of the project got ousted).

    • MonkderDritte@feddit.de · 9 months ago

      LLMs are just very complex and intricate mirrors of ourselves because they use our past ramblings to pull from for the best responses to a prompt. They only feel like they are intelligent because we can’t see the inner workings

      Almost like children.

    • frezik@midwest.social · 9 months ago

      I find that a lot of the reasons people put up for saying “LLMs are not intelligent” are wishy-washy, vague, untestable nonsense. It’s rarely something where we can put a human and ChatGPT together in a double-blind test and have the results clearly show that one meets the definition and the other does not. Now, I don’t think we’ve actually achieved AGI, but more for general Occam’s Razor reasons than something more concrete; it seems unlikely that we’ve achieved something so remarkable while understanding it so little.

      I recently saw this video lecture by a neuroscientist, Professor Anil Seth:

      https://royalsociety.org/science-events-and-lectures/2024/03/faraday-prize-lecture/

      He argues that our language is leading us astray. Intelligence and consciousness are not the same thing, but the way we talk about them with AI tends to conflate the two. He gives examples of where our consciousness leads us astray, such as seeing faces in clouds. Our consciousness seems to really like pulling faces out of false patterns. Hallucinations would be the times when the error-correcting mechanisms of our consciousness go completely wrong: you don’t just see faces in random objects, you start seeing unicorns and rainbows on everything.

      So when you say that people were convinced that ELIZA was an actual psychologist who understood their problems, that might be another example of our own consciousness giving the wrong impression.

      • vcmj@programming.dev · 9 months ago

        Personally my threshold for intelligence versus consciousness is determinism (not in the physics sense… that’s a whole other kettle of fish). I’d consider all “thinking things” to be machines, but if a machine always responds to the same input in the same way, then it is non-sentient, whereas if it incurs an irreversible change on receiving any input that can affect its future responses, then it has potential for sentience. LLMs can do continuous learning for sure, which may give the impression of sentience (whispers which we are longing to find and want to believe, as you say), but the actual machine you interact with is frozen, hence it is purely an artifact of sentience. I consider books and other works to be in the same category.

        I’m still working on this definition, again just a personal viewpoint.
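
        In toy code, the distinction I’m trying to draw looks something like this (the classes and names are made up, just to make the frozen-versus-changing point concrete):

```python
class FrozenMachine:
    """Always answers the same input the same way: a non-sentient artifact."""
    def __init__(self, table):
        self.table = table  # fixed once; nothing the user says ever changes it

    def respond(self, prompt: str) -> str:
        return self.table.get(prompt, "…")


class AdaptiveMachine:
    """Every input leaves an irreversible trace that shapes future responses."""
    def __init__(self):
        self.history = []

    def respond(self, prompt: str) -> str:
        self.history.append(prompt)  # the machine is permanently changed by this input
        return f"({len(self.history)} inputs so far) you said: {prompt}"


frozen = FrozenMachine({"hi": "hello"})
adaptive = AdaptiveMachine()

print(frozen.respond("hi"), frozen.respond("hi"))      # identical every time
print(adaptive.respond("hi"), adaptive.respond("hi"))  # differs, because internal state changed
```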

    • FaceDeer@fedia.io · 9 months ago

      I wouldn’t be surprised if someday when we’ve fully figured out how our own brains work we go “oh, is that all? I guess we just seem a lot more complicated than we actually are.”

      • Rhaedas@fedia.io · 9 months ago

        If anything I think the development of actual AGI will come first and give us insight into why some organic mass can do what it does. I’ve seen many AI experts say that one reason they got into the field was to try and figure out the human brain indirectly. I’ve also seen one person (I can’t recall the name) say we already have a form of rudimentary AGI existing now: corporations.

        • antonim@lemmy.dbzer0.com · 9 months ago

          Something of the sort has already been claimed for language/linguistics, i.e. that LLMs can be used to understand human language production. One linguist wrote a pretty good reply to such claims, which can be summed up as “this is like inventing an airplane and using it to figure out how birds fly”. I mean, who knows, maybe that even could work, but it should be admitted that the approach appears extremely roundabout and very well might be utterly fruitless.

      • BigMikeInAustin@lemmy.world · 9 months ago

        True.

        That’s why consciousness is still “magical.” If neurons ultra-basically do IF logic, how does that become consciousness?

        And the same with memory. It can seem to boil down to one memory cell reacting to a specific input, hence the idea of “the grandmother cell.” Is there just one cell that holds the memory of your grandmother? If that one cell gets damaged or dies, do you lose the memory of your grandmother?

        And ultimately, if thinking is just IF logic, does that mean every decision and thought is predetermined and can be computed, given a big enough computer and all the exact starting values?

        • huginn@feddit.it · 9 months ago

          You’re implying that physical characteristics are inherently deterministic while we know they’re not.

          Your neurons are analog and noisy and sensitive to the tiny fluctuations of random atomic noise.

          Beyond that: they don’t do “if” logic; it’s more like complex combinatorial arithmetic that simultaneously modifies future outputs with every input.
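
          Very roughly, something like this caricature rather than a clean IF/THEN (the numbers and update rule are invented for illustration, not a real biological model):

```python
import math
import random

# Caricature of a neuron: a noisy weighted sum squashed through a smooth curve,
# with weights that drift a little after every input. All numbers are made up.
weights = [0.8, -0.3, 0.5]

def fire(inputs):
    global weights
    jitter = random.gauss(0.0, 0.05)                      # "analog" noise on the sum
    activation = sum(w * x for w, x in zip(weights, inputs)) + jitter
    output = 1.0 / (1.0 + math.exp(-activation))          # smooth response, not a hard IF
    # Crude "plasticity": each input nudges the weights, changing future outputs.
    weights = [w + 0.01 * x * output for w, x in zip(weights, inputs)]
    return output

print(fire([1.0, 0.2, 0.7]))
print(fire([1.0, 0.2, 0.7]))  # same input, slightly different answer
```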

          • FaceDeer@fedia.io · 9 months ago

            Though I should point out that the virtual neurons in LLMs are also noisy and sensitive, and the randomness they use ultimately traces back to tiny fluctuations of atomic noise too.
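
            (For what it’s worth, the most visible place randomness enters a deployed model is the sampling step at the output, something like this sketch with made-up logits; where the random seed itself ultimately comes from depends on the system.)

```python
import math
import random

# Toy sketch of temperature sampling over made-up logits; this is the step
# where randomness typically enters when a model generates text.
logits = {"cat": 2.0, "dog": 1.5, "banana": -1.0}

def sample(logits, temperature=0.8):
    scaled = {tok: val / temperature for tok, val in logits.items()}
    normaliser = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / normaliser for tok, v in scaled.items()}
    r, cumulative = random.random(), 0.0
    for tok, p in probs.items():
        cumulative += p
        if r <= cumulative:
            return tok
    return tok  # guard against floating-point rounding

print([sample(logits) for _ in range(5)])  # mostly "cat", sometimes "dog", rarely "banana"
```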