OpenAI Shuts Down Election Influence Operation Using ChatGPT Technology
Introduction
OpenAI has banned a cluster of ChatGPT accounts linked to an Iranian influence operation that was generating content about the U.S. presidential election. This is not the first time OpenAI has taken action against state-affiliated actors misusing ChatGPT.
Background
OpenAI has previously disrupted five campaigns that used ChatGPT to manipulate public opinion, tactics reminiscent of state actors using social media platforms like Facebook and Twitter to influence previous election cycles. Now similar groups (or perhaps the same ones) are using generative AI to flood social channels with misinformation.
The Iranian Influence Operation
OpenAI says its investigation of this cluster of accounts benefited from a Microsoft Threat Intelligence report published last week, which identified the group (which Microsoft calls Storm-2035) as part of a broader campaign to influence U.S. elections that has been operating since 2020. Microsoft described Storm-2035 as an Iranian network running multiple sites that imitate news outlets and actively engage U.S. voter groups on opposing ends of the political spectrum, pushing polarizing messaging on issues such as the U.S. presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.
The Playbook
As in other such operations, the playbook is not necessarily to promote one policy or another but to sow dissent and conflict. OpenAI identified five website fronts for Storm-2035, presenting as both progressive and conservative news outlets with convincing domain names like ‘evenpolitics.com.’ The group used ChatGPT to draft several long-form articles, including one alleging that ‘X censors Trump’s tweets,’ something Elon Musk’s platform certainly has not done (if anything, Musk has encouraged former president Donald Trump to engage more on X).
Social Media Presence
OpenAI says it did not see evidence that Storm-2035’s articles were shared widely, and noted that the majority of the group’s social media posts received few to no likes, shares, or comments. This is often the case with these operations, which are quick and cheap to spin up using AI tools like ChatGPT.
Implications
Expect to see many more notices like this as the election approaches and partisan bickering online intensifies. The use of generative AI to flood social channels with misinformation is becoming increasingly prevalent, and it’s essential for companies like OpenAI to take proactive measures to prevent these operations from succeeding.
Conclusion
OpenAI’s ban on the cluster of ChatGPT accounts linked to the Iranian influence operation is a significant step in curbing the spread of misinformation. Still, continued vigilance and monitoring will be necessary as the election approaches. The use of generative AI by state-affiliated actors is becoming increasingly sophisticated, and companies like OpenAI must stay ahead of these operations to protect public discourse.
Recommendations
- Increased Monitoring: Companies like OpenAI should increase their monitoring efforts to identify and prevent similar operations from succeeding.
- Improved Detection: The development of more advanced detection tools is crucial to identifying the use of generative AI by state-affiliated actors.
- Collaboration: Collaboration between companies, governments, and researchers is essential to stay ahead of these operations and protect public discourse.
Related Topics
- 2024 election
- AI
- ChatGPT
- Generative AI
- Misinformation
- OpenAI
- Security