In Negotiations for License Agreements with Major Media Outlets
On the 15th (local time), OpenAI released new tools to prevent its services from being misused in elections. The move is aimed at addressing growing concerns about AI-generated 'deepfake' misinformation ahead of elections scheduled in major countries around the world this year.
On the same day, OpenAI announced on its blog that it will label the sources of information and images provided by ChatGPT and Dall-E. It will also display article bylines and links so that ChatGPT can retrieve information in real time. OpenAI is currently negotiating content licensing agreements with major media outlets such as CNN, Fox, and Time magazine.
Images created with Dall-E will include source data such as the creator and creation date to verify whether they were AI-generated. A watermark indicating that the image was generated by Dall-E will also be inserted. Additionally, OpenAI is preparing to launch an image detection tool that can verify whether an image was created by Dall-E. Initially, feedback will be sought from a test group consisting of journalists, platforms, and researchers. Previously, in October last year, Mira Murati, OpenAI’s Chief Technology Officer (CTO), stated that the image detection tool demonstrated an accuracy rate of 99% in internal tests.
OpenAI stated, "Transparency about information sources helps voters better evaluate information and decide for themselves what to trust."
Recently, cases of deepfakes being exploited for political ends have surged. In May last year in Türkiye, a manipulated video claiming that a terrorist group supported an opposition candidate circulated just before the presidential election, influencing its outcome. AI-manipulated images, such as a fake arrest photo of former U.S. President Trump and a fabricated explosion at the U.S. Department of Defense building, also caused a significant stir.
Other big tech companies are also introducing measures to prevent AI-generated deepfakes. In December last year, Google announced it would restrict election-related questions on its AI chatbot Bard. Meta, Facebook’s parent company, requires disclosure if AI is used in political advertisements. Google and Adobe also plan to add watermarks to AI-generated images.
However, critics say these measures alone are insufficient. The Washington Post noted, "Visible watermarks can be easily cropped or edited out. Even watermarks embedded invisibly in an image can be distorted by flipping the image or changing its colors."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.