Throughout history many traditions have believed that some fatal flaw in human nature tempts us to pursue powers we don’t know how to handle. The Greek myth of Phaethon told of a boy who discovers that he is the son of Helios, the sun god. Wishing to prove his divine origin, Phaethon demands the privilege of driving the chariot of the sun. Helios warns Phaethon that no human can control the celestial horses that pull the solar chariot. But Phaethon insists, until the sun god relents. After rising proudly in the sky, Phaethon indeed loses control of the chariot. The sun veers off course, scorching all vegetation, killing numerous beings and threatening to burn the Earth itself. Zeus intervenes and strikes Phaethon with a thunderbolt. The conceited human drops from the sky like a falling star, himself on fire. The gods reassert control of the sky and save the world.

Two thousand years later, when the Industrial Revolution was taking its first steps and machines began replacing humans in numerous tasks, Johann Wolfgang von Goethe published a similar cautionary tale titled The Sorcerer’s Apprentice. Goethe’s poem (later popularised as a Walt Disney animation starring Mickey Mouse) tells of an old sorcerer who leaves a young apprentice in charge of his workshop and gives him some chores to tend to while he is gone, such as fetching water from the river. The apprentice decides to make things easier for himself and, using one of the sorcerer’s spells, enchants a broom to fetch the water for him. But the apprentice doesn’t know how to stop the broom, which relentlessly fetches more and more water, threatening to flood the workshop. In a panic, the apprentice cuts the enchanted broom in two with an axe, only to see each half become another broom. Now two enchanted brooms are inundating the workshop with water. When the old sorcerer returns, the apprentice pleads for help: “The spirits that I summoned, I now cannot rid myself of again.” The sorcerer immediately breaks the spell and stops the flood. The lesson to the apprentice – and to humanity – is clear: never summon powers you cannot control.

  • Treedrake@fedia.io · 4 months ago

    Luckily the only “AI” we have are LLMs, which seem to have hit their peak and will probably start corrupting themselves with their own training data now that they’ve scoured the web clean.

    • WhatAmLemmy@lemmy.world · 4 months ago

      LLMs on their own aren’t much of a concern. What is a concern is strapping weapons to one of those Boston Dynamics robots, loading an LLM, and training it to kill.

      Governments already kill based on metadata — analyzed by statistical models — so the above isn’t far from reality.

      • xmunk@sh.itjust.works · 4 months ago

        “Turn it on, let us kill our enemies”

        immediately starts quoting Shakespeare

        I am uncertain why you think an LLM would be well suited to this task - it’s an inappropriate model for that function…

        • WhatAmLemmy@lemmy.world · 4 months ago

          An LLM = machine learning. The language part is largely irrelevant. It finds patterns in 1’s and 0’s, and produces results based on statistical probability. This can be applied to literally anything that can be represented in 1’s and 0’s (e.g. everything in the known universe).

          Do you not understand how that could be used to target “terrorists”, or how it could be utilized by a killbot? They can fine tune what metadata = “terrorist”, but (most importantly) false positives are a guaranteed mathematical certainty of statistical models, meaning innocent people are guaranteed to be classified as “terrorist”. Then there’s the more pressing concern of who gets to define what a “terrorist” is.
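A quick numeric sketch of the false-positive point above (all numbers are hypothetical, chosen only to illustrate the base-rate effect): even a very accurate classifier screening a large population for a rare class flags far more innocents than real targets.

```python
# Hypothetical numbers illustrating the base-rate effect: an accurate
# classifier screening a large population for a rare class still
# produces far more false positives than true hits.
population = 10_000_000      # people screened
actual_targets = 1_000       # real "targets" in the population
sensitivity = 0.99           # fraction of real targets correctly flagged
false_positive_rate = 0.001  # fraction of innocents wrongly flagged

true_hits = actual_targets * sensitivity
false_hits = (population - actual_targets) * false_positive_rate
precision = true_hits / (true_hits + false_hits)

print(f"innocents flagged: {false_hits:,.0f}")              # 9,999
print(f"flagged person is a real target: {precision:.0%}")  # 9%
```

With these illustrative rates, roughly ten innocent people are flagged for every real target, which is the commenter’s point about false positives being a mathematical certainty.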

          • JackGreenEarth@lemm.ee · 4 months ago

            LLM (Large Language Model) != ML (Machine Learning)

            LLM is a subset of ML, but they are not the same

    • ravhall@discuss.online · 4 months ago

      I think there’s still a lot of room to grow with LLMs, but nothing will ever be 100% trustworthy. Especially the human brain.

  • mashbooq@infosec.pub · 4 months ago

    Sigh, another major thinker who totally misunderstands LLMs and their capabilities. The fact that he cites Musk as a credible source on “AI” says it all.

  • Carrolade@lemmy.world · 4 months ago

    A 2014 survey of British MPs – charged with regulating one of the world’s most important financial hubs – found that only 12% accurately understood that new money is created when banks make loans.

    I don’t really expect most people to know this one, but 12% of British parliamentarians is a little disappointing.

    • APassenger@lemmy.world · 4 months ago

      When 11 people all own the same dollar, there are more dollars.

      It’s a one-minute explanation I got as a junior in high school over 30 years ago, and it’s not hard to remember either. Banking changed things.
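The “same dollar owned by 11 people” idea is the textbook money-multiplier model, a simplification of how bank lending creates money. A minimal sketch (function name and numbers are purely illustrative):

```python
# Textbook money-multiplier sketch: each deposit is re-lent minus a
# reserve, and the lent portion becomes someone else's deposit, so the
# total of all deposits exceeds the original dollar.
def total_deposits(initial: float, reserve_ratio: float, rounds: int) -> float:
    """Total deposits after repeatedly re-lending all but the reserve."""
    total, deposit = 0.0, initial
    for _ in range(rounds):
        total += deposit
        deposit *= 1 - reserve_ratio  # lent out, then redeposited next round
    return total

# With a 10% reserve, one original dollar backs nearly $10 of deposits:
print(round(total_deposits(1.0, 0.10, rounds=200), 2))  # 10.0
```

The geometric series converges to initial / reserve_ratio, which is why a 10% reserve ratio gives a multiplier of 10.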

  • dan1101@lemm.ee · 4 months ago

    What we have now is not the AI we need to fear. The only thing to fear in LLMs is blindly trusting them.

    • APassenger@lemmy.world · 4 months ago

      Layoffs are occurring and LLMs are being cited as taking those jobs.

      While I’m not concerned an LLM will take my job - nor that my leadership would do that - others do not have that luxury.

      Sucks that we’re here, but it’s happening to some.

  • Pennomi@lemmy.world · 4 months ago

    Ah yes, let’s use the famously true stories of ancient mythology to prove a point about modern technology. That will definitely not be full of logical fallacies.

      • Pennomi@lemmy.world · 4 months ago

        Okay but again, those stories are fiction from history. It’s silly to look at fiction as a source of authoritative truth.

        • yesman@lemmy.world · 4 months ago

          This is a bad take. Sometimes fiction is the best way to understand history. Furthermore, since authors are people in history, they often provide something more valuable than the outcome of a battle or the death of a king: the ideas, mood, and culture of ordinary people in historical context.

          Traditional histories are crippled by survivorship bias and centered on elites.