When and How Should AI-Generated Content Be Labeled? Government Releases New Standards

Flexible Labeling Allowed for Internal Services,
Clear Marking Mandatory for External Distribution and Deepfakes
Detailed Transparency Standards Set Under AI Basic Act
Differentiated Compliance Requirements for Businesses

The criteria for labeling AI-generated content differ depending on whether the content is used within a service or distributed externally. In particular, for generative content that carries a significant risk of social confusion, such as deepfakes, labeling that humans can clearly recognize is mandatory.


The Ministry of Science and ICT clarified the labeling standards for AI-generated content by service type through the "Artificial Intelligence Transparency Assurance Guidelines," which detail the transparency obligations under the "Artificial Intelligence (AI) Basic Act," which takes effect on January 22, 2026.

A case introduced in the "Artificial Intelligence Transparency Assurance Guidelines." Provided by the Ministry of Science and ICT

The Ministry of Science and ICT explained that the guidelines were organized around the actual operating types of AI products and services, reflecting industry feedback that the law and its enforcement decree alone are difficult to apply in practice. The transparency provisions will be subject to a grace period of at least one year, during which fact-finding investigations and the imposition of fines will be suspended.


The core of the guidelines is differentiated application. When AI-generated content is used only within service environments such as chatbots, games, or metaverse platforms, relatively flexible labeling methods are permitted, such as UI notifications or logo displays. For example, conversational services may provide notifications before use or display indicators within the interface, while games and metaverse services may use notifications at login or character-based indicators.
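As a rough illustration of the in-service case, the sketch below prepends a disclosure notice to a conversational service's output. This is a minimal Python sketch under assumed conventions: the notice wording, the "[AI]" tag, and the first-turn policy are hypothetical choices, not formats prescribed by the guidelines.

```python
# Hypothetical in-service indicator for a conversational AI service.
# The notice text and placement are illustrative assumptions; the
# guidelines permit flexible methods such as UI notices or logos.

AI_NOTICE = "Notice: responses in this chat are generated by an AI system."

def present_reply(reply: str, turn: int) -> str:
    """Attach an AI-generated indicator to a chatbot reply.

    Shows a full notice before use (turn 0) and keeps a short inline
    tag on later turns so the indicator stays visible in the interface.
    """
    if turn == 0:
        return f"{AI_NOTICE}\n\n[AI] {reply}"
    return f"[AI] {reply}"

print(present_reply("Hello! How can I help?", turn=0))
```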


On the other hand, when AI-generated results are exported outside the service, such as through downloads or sharing, the labeling standards are strengthened. For text, images, videos, and other generative content, either visible or audible watermarks recognizable by humans must be applied, or human-readable notices (text or voice) must be combined with machine-readable methods such as metadata, as in the sketch below. For deepfake content that is difficult to distinguish from reality, clear human-recognizable labeling is mandatory without exception.
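To make the combined requirement concrete, the following is a minimal sketch for an exported image, assuming the Pillow library: it overlays a human-visible watermark and embeds machine-readable PNG text chunks. The metadata keys (ai_generated, generator) and the watermark text are hypothetical placeholders; the guidelines do not prescribe a specific format.

```python
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src_path: str, dst_path: str) -> None:
    """Apply both labeling layers described for exported content:
    a human-visible watermark plus machine-readable metadata."""
    img = Image.open(src_path).convert("RGB")

    # Human-recognizable layer: draw a visible notice onto the image.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 24), "AI-generated", fill=(255, 255, 255))

    # Machine-readable layer: embed provenance as PNG text chunks.
    # Key names here are illustrative, not a standardized schema.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", "example-model-v1")

    img.save(dst_path, "PNG", pnginfo=meta)
```

A downstream service could then read the embedded text chunks back (for example, via Image.open(path).text in Pillow) to detect the machine-readable label automatically, alongside the notice visible to humans.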


The guidelines also clarify who is responsible for ensuring transparency. The obligations apply to "AI service providers" who directly offer AI products or services to users, including overseas providers serving users in Korea. In contrast, users who utilize AI as a tool for work or creative purposes are excluded from these obligations.


The Ministry of Science and ICT stated, "Watermarks on AI-generated content are a minimum safety measure to prevent the misuse of deepfakes and reflect a global trend," adding, "We will continue to communicate with the industry during the grace period and further refine and enhance the guidelines to ensure the system is effectively implemented in practice." The full guidelines are available on the websites of the Ministry of Science and ICT and the Korea Information and Communication Technology Association.


© The Asia Business Daily (www.asiae.co.kr). All rights reserved.

