
OpenAI, maker of ChatGPT, on Tuesday announced its plan to introduce tools to combat misinformation and disinformation as several countries across the world gear up for elections this year.
The global artificial intelligence boom, sparked by the unexpected success of the text generator ChatGPT, has also raised concerns about the spread of disinformation.
This year, 78 countries, including Pakistan, India, the United States and Britain, will hold 83 national-level executive or legislative elections.
Against this backdrop, OpenAI said it will not allow its technology — including ChatGPT and the image generator DALL-E 3 — to be used for political campaigns.
"We want to make sure our technology is not used in a way that could undermine" the democratic process, OpenAI said in a blog post.
"We’re still working to understand how effective our tools might be for personalized persuasion," it added.
"Until we know more, we don't allow people to build applications for political campaigning and lobbying."
The World Economic Forum, in a report released last week, warned that AI-driven misinformation and disinformation are among the biggest short-term global risks, with the potential to undermine newly elected governments in major economies.
Fears over election disinformation began years ago, but the public availability of potent AI text and image generators has boosted the threat, experts say, especially if users cannot easily tell if the content they see is fake or manipulated.
OpenAI said it was working on tools that would attach reliable attribution to text generated by ChatGPT, and give users the ability to detect whether an image was created using DALL-E 3.
"Early this year, we will implement the Coalition for Content Provenance and Authenticity's digital credentials -- an approach that encodes details about the content's provenance using cryptography," the company said.
The coalition, also known as C2PA, aims to improve methods for identifying and tracing digital content. Its members include Microsoft, Sony, Adobe and Japanese imaging firms Nikon and Canon.
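C2PA's actual credentials rely on certificate-based signatures and a standardized manifest format embedded in media files; the sketch below only illustrates the underlying idea of cryptographically binding provenance metadata to content. The key, field names, and JSON layout here are invented for illustration and are not part of the C2PA specification.

```python
import hashlib
import hmac
import json

# Illustrative only: a real implementation would use X.509 certificates,
# not a shared secret key.
SECRET_KEY = b"demo-signing-key"

def attach_credential(content: bytes, metadata: dict) -> dict:
    """Bind provenance metadata to content via a keyed hash (illustrative)."""
    payload = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "metadata": metadata,
    }
    serialized = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET_KEY, serialized, hashlib.sha256).hexdigest()
    return payload

def verify_credential(content: bytes, credential: dict) -> bool:
    """Check that the content and its metadata match the signed credential."""
    claimed = {k: v for k, v in credential.items() if k != "signature"}
    serialized = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, serialized, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, credential["signature"])
            and hashlib.sha256(content).hexdigest() == claimed["content_sha256"])

image = b"...generated image bytes..."
cred = attach_credential(image, {"generator": "DALL-E 3", "created": "2024-01-16"})
print(verify_credential(image, cred))                 # True: untouched content verifies
print(verify_credential(image + b"edit", cred))       # False: any alteration breaks it
```

The point of such a scheme is that any edit to the content, or to its claimed provenance, invalidates the credential, which is what lets viewers detect manipulated AI-generated media.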
OpenAI also said that when ChatGPT is asked procedural questions about US elections, such as where to vote, it will direct users to authoritative websites.
"Lessons from this work will inform our approach in other countries and regions," the company said.
It added that DALL-E 3 has "guardrails" that prevent users from generating images of real people, including candidates.
OpenAI’s announcement follows steps revealed last year by US tech giants Google and Facebook parent Meta to limit election interference, especially through the use of AI.