Bill Proposes Mandatory Labeling of AI-Generated Content
AI Basic Act Leaves Distribution Stage Unprotected
Calls for Labeling Obligations and Swift Removal Measures
Jo Incheol, Member of the Democratic Party of Korea (representing Gwangju Seo-gu Gap), sponsored a two-bill package on January 20 aimed at introducing a labeling system for AI-generated content and enabling swift responses to false and exaggerated AI advertisements. The package comprises amendments to the Information and Communications Network Act and the Act on the Establishment of the Broadcasting and Communications Review Board.
The bills respond to concerns that AI-generated content such as deepfakes is spreading rapidly through information and communications networks, making it difficult for users to discern what is true, while institutional mechanisms to regulate such content at the distribution stage, including on platforms, remain insufficient.
According to Jo Incheol's office, with the recent advancement of technology, even general users are struggling to distinguish the authenticity of AI-generated content, which is being indiscriminately distributed through social media and platforms. For instance, in the case of an AI-synthesized "fake police dispatch" video, many users mistook it for a real event, leading to social confusion.
This environment poses particular risks to digitally vulnerable groups, who have relatively low digital information literacy.
According to the Ministry of Science and ICT's "2023 Digital Information Gap Survey," the digital literacy level of the elderly is 70.7%, the lowest among information-vulnerable groups. There are concerns that children and the elderly, who routinely consume YouTube and social media, are highly likely to suffer physical or financial harm if they are exposed to manipulated expert videos or deepfakes and believe them to be genuine.
The "AI Basic Act," which will take effect on January 22, stipulates notification and labeling obligations for generative AI outputs. However, this regulation is limited to "AI operators" who develop AI or use it to provide products and services. There remains a legislative gap regarding the labeling and management responsibilities at the distribution and dissemination stage, such as on portals and platforms, which limits the protection of the public.
The amendment to the Information and Communications Network Act, proposed by Jo Incheol, seeks to address this institutional gap by: ▲ (Platform) imposing obligations on platform operators to maintain and manage AI-generated content labeling by uploaders and users; ▲ (Uploader) requiring those who directly create or edit and upload AI-generated content to label it as such; ▲ (User) prohibiting the arbitrary removal or damage of AI-generated content labels.
Jo Incheol's office explained that these measures are intended to provide users with at least a basic standard for determining whether information is real or AI-generated.
Additionally, the amendment to the Act on the Establishment of the Broadcasting and Communications Review Board, which was also proposed as part of the package bills, aims to address delays in reviewing false and exaggerated advertisements using AI. It institutionalizes the inclusion of unfair advertisements in areas directly related to public health, such as pharmaceuticals, cosmetics, and medical devices, as subjects for written review, enabling urgent responses.
Jo Incheol stated, "As AI technology advances, it has become difficult for the public to distinguish whether the information they encounter is real or generated by AI, yet institutional measures to protect users remain insufficient. While the AI Basic Act addresses the responsibilities of AI developers, this bill clarifies the responsibility for public protection at the distribution stage, such as on platforms, and aligns with the government's policy direction in many respects."
He added, "The intention is not to stifle the development of AI technology, but to establish basic standards for user protection so that innovation can continue in an environment of trust. The National Assembly and the government must act swiftly to fill the institutional gaps so that not only digitally vulnerable groups but all citizens are not left defenseless against the threat of deepfakes."
Finally, Jo Incheol stated, "Starting with clear labeling of AI-generated content on information and communications networks, I will continue to promote relevant legislation to ensure a balance between the development of the AI industry and user protection."
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.

