I think AI is neat.

  • casmael@lemm.ee
    11 months ago

    Yeah, this sounds about right. What was OP implying? I’m a bit lost.

    • ricecake@sh.itjust.works
      11 months ago

      I believe they were implying that a lot of the people who say “it’s not real AI it’s just an LLM” are simply parroting what they’ve heard.

      Which is a fair point, because AI has never meant “general AI”; it’s an umbrella term for a wide variety of intelligence-like tasks as performed by computers.
      Autocorrect on your phone is a type of AI: it compares the words you type against a database of known words via a “typo distance”, and adds new words to its database when you overrule it so it doesn’t make the same mistake again.
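      The “typo distance” idea above is essentially edit (Levenshtein) distance. A minimal sketch, assuming a made-up word list; real autocorrect systems also weight by word frequency and keyboard layout:

      ```python
      def edit_distance(a: str, b: str) -> int:
          # Classic dynamic-programming table: dp[i][j] is the number of
          # single-character insertions, deletions, or substitutions needed
          # to turn a[:i] into b[:j].
          dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
          for i in range(len(a) + 1):
              dp[i][0] = i
          for j in range(len(b) + 1):
              dp[0][j] = j
          for i in range(1, len(a) + 1):
              for j in range(1, len(b) + 1):
                  cost = 0 if a[i - 1] == b[j - 1] else 1
                  dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                                 dp[i][j - 1] + 1,        # insertion
                                 dp[i - 1][j - 1] + cost) # substitution
          return dp[len(a)][len(b)]

      def autocorrect(typed: str, known_words: list[str]) -> str:
          # Suggest the known word with the smallest typo distance.
          return min(known_words, key=lambda w: edit_distance(typed, w))

      words = ["vehicle", "motorcycle", "computer"]  # toy database
      print(autocorrect("vehical", words))  # prints "vehicle"
      ```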

      It’s like saying a motorcycle isn’t a real vehicle because a real vehicle has two wings, a roof, and carries hundreds of people through the air.

      • ALostInquirer@lemm.ee
        11 months ago

        Which is a fair point, because AI has never meant “general AI”; it’s an umbrella term for a wide variety of intelligence-like tasks as performed by computers.

        Do you mean in the everyday sense or the academic sense? I think this is why there’s such grumbling around the topic. Academically speaking that may be correct, but for the general public, AI has been muddled and presented in a much more robust, general-AI way, especially in fiction. Look at any number of sci-fi movies featuring forms of AI, whether it’s the movie literally named AI, or Terminator, or Blade Runner, or more recently Ex Machina.

        Each of these may technically be presenting general AI, but for the public, it’s just AI. In a weird way, this discussion is an inversion of what one usually sees between academics and the public. Generally academics are trying to get the public to stop using technical terms loosely, yet here part of the public is trying to get the tech/academic sphere to stop using, at least as they see it, a technical term loosely.

        Arguably it’s from a misunderstanding, but if anyone should understand the dynamics of language, you’d hope it would be those trying to calibrate machines to process language.

      • ParsnipWitch@feddit.de
        11 months ago

        I’ve often seen people on Lemmy confidently state that current “AI” thinks and learns exactly like humans and that LLMs work exactly like human brains, etc.

        • LainTrain@lemmy.dbzer0.com
          11 months ago

          Are you sure this wasn’t just people stating that, when it comes to training on art, there is no functional difference, in the sense that both humans and AI need to see art to make it?

    • Redacted@lemmy.world
      11 months ago

      I believe OP is attempting to take on an army of straw men in the form of a poorly chosen meme template.

    • XEAL@lemm.ee
      11 months ago

      I guess that, no matter what they are or what you call them, they can still be useful.

    • iheartneopets@lemm.ee
      11 months ago

      Pretty sure the meme format is for something you get extremely worked up about and want to passionately tell someone about, even in inappropriate moments, but that no one really gives a fuck about.

    • R0cket_M00se@lemmy.world
      11 months ago

      People who don’t understand or use AI think it’s less capable than it is, claim it’s not AGI (which no one was saying anyway), and try to make it seem less valuable because it’s “just using datasets to extrapolate, it doesn’t actually think.”

      Guess what you’re doing right now when you “think” about something? That’s right: you’re calling up the thousands of experiences that make up your “training data” and using them to extrapolate what actions you should take based on that data.

      You know how to parallel park because you’ve assimilated road laws, your muscle memory, and the knowledge of your car’s wheelbase into a single action. AI just doesn’t have sapience and therefore cannot act without input, but the process it uses is functionally similar to how we make decisions; the difference is that the training data gets input within seconds as opposed to being built up over a lifetime.

      • Xavienth@lemmygrad.ml
        11 months ago

        If you’ve ever actually used any of these algorithms, it becomes painfully obvious that they do not “think”. Give one a task slightly more complex or nuanced than what it has been trained on and you will see it draw obviously false conclusions that would be obviously wrong had any actual thought taken place. Generalization is not something they do, and it is a fundamental part of human problem solving.

        Make no mistake: they are text predictors.
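        At its simplest, “text predictor” means something like the toy bigram model below: pick the word most often seen following the current one. Real LLMs are neural networks over subword tokens, but the training objective has the same shape (predict the next token); the corpus here is made up for illustration:

        ```python
        from collections import Counter, defaultdict

        # Toy bigram "text predictor": count which word follows which
        # in the training text, then predict the most frequent follower.
        corpus = "the cat sat on the mat and the cat slept".split()

        follows: dict[str, Counter] = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def predict_next(word: str) -> str:
            # Return the most common word observed after `word`.
            return follows[word].most_common(1)[0][0]

        print(predict_next("the"))  # prints "cat" (seen twice, vs "mat" once)
        ```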

      • Dr. Jenkem@lemmy.blugatch.tube
        11 months ago

        People who aren’t programmers, haven’t studied computer science, and don’t understand LLMs are much more impressed by LLMs.

        • Feathercrown@lemmy.world
          11 months ago

          That’s true of any technology. As someone who is a programmer, has studied computer science, and does understand LLMs, I can say this represents a massive leap in capability. Is it AGI? No. Is it a potential paradigm shift? Yes. This isn’t pure hype like crypto was; there is a core of utility here.

          • LainTrain@lemmy.dbzer0.com
            11 months ago

            Crypto was never pure hype either. Decentralized currency is an important thing to have; it’s just shitty that it turned into a speculative investment asset rather than a way to buy drugs online without the glowies looking.

          • R0cket_M00se@lemmy.world
            11 months ago

            Yeah, I studied CS and work in IT Ops. I’m not claiming this shit is Cortana from Halo, but it’s also not NFTs. If you can’t see the value, you haven’t used it for anything serious, because it’s taking jobs left and right.