Networks in China and Iran also used AI models to create and post disinformation, but the campaigns did not reach large audiences

In Russia, two operations created and spread content criticizing the US, Ukraine and several Baltic nations. One of the operations used an OpenAI model to debug code and create a bot that posted on Telegram. China’s influence operation generated text in English, Chinese, Japanese and Korean, which operatives then posted on Twitter and Medium.

Iranian actors generated full articles that attacked the US and Israel, which they translated into English and French. An Israeli political firm called Stoic ran a network of fake social media accounts which created a range of content, including posts accusing US student protests against Israel’s war in Gaza of being antisemitic.

  • frog 🐸@beehaw.org

    Techbros once again surprised at how their technology is used.

    The other breaking headlines for today:

    Shock discovery that water is wet.

    Toddler discovers that fire is hot after touching it.

    Bear shits in woods.

    Pope revealed to be Catholic.

    • eveninghere@beehaw.org

We aren’t naive. We all knew this would happen. But, even so, it was better than banning AI in the free world and giving dictators advantages in AI tech.

      • frog 🐸@beehaw.org
        link
        fedilink
        English
        arrow-up
        13
        ·
        edit-2
        1 month ago

        Ah, the old “the only way to stop a bad person with a gun is for all the good people to have guns” argument.

        Were the dictators even working on their own large language models, or do these tools only exist because OpenAI made one and released it to the public before all the consequences had been considered, thus sparking an arms race where everyone felt the need to jump in on the action? Because as far as I can see, ChatGPT being used to spread disinformation is only a problem because OpenAI were too high on the smell of their own arses to think about whether making ChatGPT publicly available was a good idea.

        • davehtaylor@beehaw.org

          Exactly.

Gods, this whole “if we outlaw AI, only outlaws will have AI” bullshit is so, so tiresome and naive.

      • Ilandar@aussie.zone

“it was better than banning AI in the free world and giving dictators advantages in AI tech.”

        The US doesn’t need to ban AI. It just needs to stop publicly deploying it, untested and unregulated, on the masses. And some of these big tech companies need to stop releasing open models that can be easily obtained and abused by bad actors. Dictatorships don’t actually like AI internally, because it threatens their control of the narrative within their country. For example, the CCP has been very cautious of it when compared to the US because it is concerned about how it could be employed against the party.

        And this whole arms race argument sort of ignores the fact that the US continuing to mass deploy this shit at breakneck speed is already giving the dictators the advantages they need to fuck with democracy. No one needs to have a real war with the US if it starts one with itself.

    • davehtaylor@beehaw.org

      You mean to tell me that the purpose-built disinformation machine that you developed has been used by malicious actors to spread disinformation?!

• AutoTL;DR@lemmings.world

    🤖 I’m a bot that provides automatic summaries for articles:


    OpenAI on Thursday released its first ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran.

    As generative AI has become a booming industry, there has been widespread concern among researchers and lawmakers over its potential for increasing the quantity and quality of online disinformation.

    OpenAI claimed its researchers found and banned accounts associated with five covert influence operations over the past three months, which were from a mix of state and private actors.

    An Israeli political firm called Stoic ran a network of fake social media accounts which created a range of content, including posts accusing US student protests against Israel’s war in Gaza of being antisemitic.

    The US Treasury sanctioned two Russian men in March who were allegedly behind one of the campaigns that OpenAI detected, while Meta also banned Stoic from its platform this year for violating its policies.

    OpenAI stated that it plans to periodically release similar reports on covert influence operations, as well as remove accounts that violate its policies.


    Saved 67% of original text.

  • kbal@fedia.io

    Before there was ChatGPT to blame, we had the “50 cent army.” The fully automated bullshit generators are a cheaper but also a less effective way to do what was already being done. I expect the real problem is somewhere closer to the design of mass social media and the human weaknesses it has evolved to exploit, not so much the super-human powers of generative AI which have been so greatly exaggerated.