The government is pushing a plan to mandate watermarks on content generated by artificial intelligence (AI) in order to counter fake news that exploits deepfakes. According to the "Plan to Establish a New Digital Order," reported by the Ministry of Science and ICT at the Cabinet meeting on the 21st, the government intends to enact and amend related laws and establish a system that responds to the entire life cycle of fake news: creation, distribution, and spread.
An "AI watermark" is a logo or text embedded in a digital image or document created with AI technology. It indicates the owner or original source of the material, helping to prevent illegal copying or unauthorized use and to strengthen the owner's rights; it is also used for copyright protection and source authentication. For generative AI, a watermark makes it easy to identify content as AI-generated, flagging possible misinformation and reducing confusion among users.
On the 7th, OpenAI unveiled a tool that can determine whether a specific image was created with its image-generation AI, DALL·E 3.
Already, governments around the world have introduced or are preparing regulations on watermarking AI outputs due to concerns about the indiscriminate distribution of fake information generated by generative AI. The European Union (EU) mandated separate labeling on AI-generated content through the Digital Services Act (DSA) in August last year, and in the United States, an executive order signed by President Joe Biden in October last year included provisions to strengthen identification mechanisms for generative AI outputs.
In particular, major U.S. tech companies have begun applying "invisible" watermarks to generative AI services. These watermarks cannot be seen by ordinary users but can be detected through dedicated websites or verification programs. Invisible watermarks are used not only to verify the source of content but also to track how images created through each company's services are distributed.
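The principle behind such invisible watermarks can be illustrated with a minimal sketch. The example below is an assumption for illustration only: it uses simple least-significant-bit (LSB) embedding on a raw grayscale byte array, whereas the commercial systems described above use far more robust, undisclosed signal-processing schemes. The point it demonstrates is the same, however: the mark changes each pixel by at most one brightness level (imperceptible to a viewer), yet a detector that knows the scheme can read it back exactly.

```python
def embed_watermark(pixels: bytes, mark: bytes) -> bytes:
    """Hide `mark` in the least-significant bits of `pixels`."""
    # Expand the mark into individual bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the watermark")
    out = bytearray(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Recover `length` bytes previously embedded by embed_watermark."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for pixel in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (pixel & 1)  # reassemble bits, MSB first
        out.append(byte)
    return bytes(out)

original = bytes(range(64))          # toy 8x8 grayscale "image"
marked = embed_watermark(original, b"AI-GEN")
print(extract_watermark(marked, 6))  # b'AI-GEN'
```

Because only the lowest bit of each pixel is touched, the watermarked image is visually indistinguishable from the original, which is exactly why a separate detection tool is needed to verify it.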
OpenAI announced on the 14th, while introducing the voice-interactive AI "GPT-4o," that it will apply watermarking to its text-to-speech (TTS) service "Voice Engine." This is because voice-generation technology carries a high risk of misuse in crimes such as voice phishing.
The Korean government plans to first introduce "visible" watermarks on outputs created by generative AI, so that users recognize up front that the content is AI-generated and damage from misuse is prevented.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.