AI Misinformation Could Rock the 2024 Elections—Here’s How OpenAI Plans to Fight It

Amid growing concern that artificial intelligence poses a threat to democracy, OpenAI has unveiled its strategy for promoting transparency around AI-generated content and improving access to reliable voting information ahead of the 2024 elections.

Since the release of GPT-4 in March 2023, generative AI and its potential misuse, including the creation of AI-generated deepfakes, have been a central concern surrounding the technology’s rapid growth. In 2024, such AI-driven misinformation could have serious ramifications, particularly during significant elections like the U.S. presidential race.

OpenAI stated in a blog post, “As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency.”

The organization also said it is coordinating experts across its safety systems, threat intelligence, legal, engineering, and policy teams to swiftly investigate and address potential abuses.

To give a snapshot of their preparations for the 2024 elections, OpenAI tweeted the following actions:
– Working to prevent abuse, including misleading deepfakes.
– Providing transparency on AI-generated content.
– Improving access to authoritative voting information.

Concerns about AI-generated campaign ads have led the U.S. Federal Election Commission to consider a petition to prohibit them, though FEC Commissioner Allen Dickerson has argued that First Amendment considerations complicate any such ban.

OpenAI plans to direct U.S. users of ChatGPT to the nonpartisan website CanIVote.org when they ask specific procedural election questions, such as where or how to vote. Lessons from that rollout will inform the company’s approach in other countries.
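OpenAI has not described how ChatGPT recognizes these procedural questions. Purely as an illustration of the concept, here is a minimal Python sketch of such a referral check; the keyword list and function name are hypothetical, not OpenAI’s implementation.

```python
# Illustrative sketch only: OpenAI has not disclosed how ChatGPT detects
# procedural election questions. This hypothetical keyword-based router
# shows the general idea of deflecting such queries to CanIVote.org.

ELECTION_PROCEDURE_TERMS = {
    "register to vote", "polling place", "voter registration",
    "absentee ballot", "mail-in ballot", "voting deadline", "where do i vote",
}

CANIVOTE_MESSAGE = (
    "For authoritative, up-to-date information on U.S. voting procedures, "
    "please visit the nonpartisan site https://www.canivote.org."
)

def route_election_query(user_message: str) -> str | None:
    """Return a CanIVote.org referral if the message looks procedural, else None."""
    text = user_message.lower()
    if any(term in text for term in ELECTION_PROCEDURE_TERMS):
        return CANIVOTE_MESSAGE
    return None

if __name__ == "__main__":
    print(route_election_query("Where do I vote in Ohio?"))       # referral message
    print(route_election_query("Explain ranked-choice voting."))  # None
```

Keyword matching like this is brittle; a production system would more plausibly use a trained intent classifier, but the routing logic would follow the same pattern.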

“We look forward to continuing to work with and learn from partners to anticipate and prevent potential abuse of our tools in the lead-up to this year’s global elections,” OpenAI added.

Within ChatGPT, OpenAI prohibits developers from building chatbots that impersonate real people or institutions, such as government officials or offices. Applications designed to discourage voting or misrepresent voter eligibility are likewise banned.

AI-generated deepfakes, fake images, videos, and audio created with generative AI, gained significant attention last year, with viral examples featuring U.S. President Joe Biden, former President Donald Trump, and even Pope Francis. To keep its image generator, Dall-E 3, out of deepfake campaigns, OpenAI plans to adopt the content credentials developed by the Coalition for Content Provenance and Authenticity (C2PA), which embed a mark or “icon” in AI-generated images to signal their provenance.
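Under the C2PA specification, these credentials are carried as JUMBF metadata boxes inside a JPEG’s APP11 segments. As a rough sketch of what detecting them involves, the Python snippet below scans a JPEG for an APP11 segment containing a C2PA manifest; a real application should rely on an official C2PA SDK rather than this byte-level heuristic.

```python
# Illustrative sketch only: a heuristic check for C2PA Content Credentials
# embedded in a JPEG. Per the C2PA spec, manifests are carried in JUMBF
# boxes inside APP11 (0xFFEB) segments; this simplified parser ignores
# rare edge cases such as fill bytes between markers.

import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):           # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):                 # EOI, or start of scan data
            break
        (length,) = struct.unpack(">H", data[i + 2 : i + 4])
        segment = data[i + 4 : i + 2 + length]     # segment payload
        if marker == 0xEB and b"c2pa" in segment:  # APP11 carrying a C2PA JUMBF box
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```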

OpenAI has also been experimenting with a provenance classifier, a tool designed to detect images generated by Dall-E. The company says early results from its internal testing have been promising, even when images undergo common modifications.
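OpenAI has not published the classifier’s design. Purely to illustrate the shape of the task, the toy PyTorch sketch below frames provenance detection as binary image classification; every architectural choice here is an assumption rather than OpenAI’s method, and the robustness to modifications that OpenAI reports would have to come from training on altered images, which this sketch omits.

```python
# Illustrative sketch only: OpenAI has not published its provenance
# classifier. This toy PyTorch binary classifier shows the general shape
# of the task: predict whether an image is AI-generated (1) or not (0).

import torch
import torch.nn as nn

class ToyProvenanceClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # global average pooling
        )
        self.head = nn.Linear(32, 1)               # single logit: P(AI-generated)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.head(h)

model = ToyProvenanceClassifier()
image = torch.rand(1, 3, 224, 224)                 # stand-in for a real image tensor
prob_ai = torch.sigmoid(model(image)).item()       # probability image is AI-made
print(f"P(AI-generated) = {prob_ai:.3f}")
```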

Separately, Pope Francis, himself the subject of viral deepfakes, has called for a binding international treaty to regulate AI, emphasizing respect for human dignity and the promotion of peace.

To combat misinformation, OpenAI announced that ChatGPT will begin offering access to real-time news reporting globally, complete with citations and links.

“The transparency of information sources and the balance in news reporting can empower voters to better assess information and make informed decisions,” stated OpenAI.

Last summer, OpenAI donated $5 million to the American Journalism Project and partnered with the Associated Press to gain access to its archive of news articles.

OpenAI’s commitment to attribution in news reporting comes as the company faces multiple copyright lawsuits, including one from the New York Times. That suit alleges that OpenAI and Microsoft (OpenAI’s largest investor) used millions of New York Times articles without permission to train ChatGPT. OpenAI has disputed the claims, arguing that the New York Times manipulated its prompts to influence the chatbot’s responses.

Edited by Andrew Hayward