KCC and KCSC Hold 'Deepfake Response Forum'
Comprehensive Measures to be Prepared Next Month... Legal and Institutional Improvements Planned
Development of AI Identification Watermark... Emphasis on 'Pinpoint Regulation'
"From the perspective of educators protecting children, I believe artificial intelligence (AI) technology should develop safely, even if that means developing somewhat slowly. Deepfakes are a prime example of the proverb that a habit formed at age three lasts until eighty."
Lee Sang-ryong, Superintendent of the Busan Metropolitan Office of Education, traveled to Seoul on the 12th to attend the 'Deepfake Response Expert Forum' held at the Broadcasting Hall in Mokdong, Seoul. Given only three minutes to speak during the two-hour forum, he closed his explanation of Busan's countermeasures against deepfake sexual crimes with the remark above.
In Busan, education aimed at preventing deepfake sexual crimes is provided not only to students but also to faculty members and parents, and this week has been designated a special prevention education week to raise awareness. The Busan Office of Education, recognized for responding more proactively than other regions, also signed an agreement with the Korea Communications Standards Commission on responding to digital sexual crimes, the first such agreement between a regional education office and the commission.
Inter-Ministerial Task Force to Prepare Countermeasures by October
Deepfakes are a representative harm and side effect that has become a social problem as AI technology advances. Professor Choi Kyung-jin of Gachon University stated, "Deepfake pornography can now be created by anyone, easily, quickly, meticulously, and at no cost," and underscored its seriousness by adding, "It damages the victim's dignity and makes a full recovery of daily life impossible."
Moon Ki-hyun, Director of the Seoul Digital Sexual Crime Safety Support Center, said, "Victims cannot go to school or leave their homes because they fear someone has seen their videos." He added, "With digital sexual crimes, it is difficult to identify the perpetrators or distributors, and even after the videos are deleted, there is no way to know when they will be redistributed."
The problem is that more than 3 out of 10 victims are teenagers. By age group, teenagers accounted for the largest share of victims at 36.3%, and also the largest share of perpetrators (31.4%). This is attributed to teenagers' rapid adoption of AI technology and a tendency to begin producing and distributing illegal deepfake videos out of curiosity or as a prank.
Deepfake sexual crimes span various areas including prevention education, perpetrator punishment, victim protection, social responsibility of online platforms, and development of AI-generated content identification technology. Therefore, an inter-ministerial task force (TF) involving relevant government departments has been formed, and a comprehensive plan is expected to be released by next month.
The Korea Communications Commission, a member of the government-wide TF, co-hosted this forum with the Korea Communications Standards Commission and the Korea Viewer Media Foundation. Kim Woo-seok, Director of the Digital Harmful Information Response Division at the Korea Communications Commission, said, "We will amend the law so that not only illegal videos circulating online but also personal information can be deleted and blocked."
In addition, major internet service providers regularly submit 'transparency reports' detailing their deletion and blocking of illegally filmed material. The Korea Communications Commission plans to strengthen the required contents of these reports and to impose penalties for false reporting.
Promotion of AI-Generated Content Watermark Introduction
The Korea Communications Commission is also promoting the introduction of a watermark system for AI-generated content. Under the scheme, online platform companies would develop technology to insert identifying marks when users upload AI-generated content to websites or social networking services (SNS). AI developers are likewise preparing methods to identify AI-generated content, and penalty provisions for violations are under review. During the forum, opinions were raised that "acts that circumvent or remove watermarks should also be prohibited."
Director Kim said, "Some argue that such regulations could dampen creative motivation in fields like entertainment," but explained, "The purposes of AI-generated content can be distinguished through a presidential decree."
Experts agreed that because AI technology is deeply embedded in daily life, responses should take the form of 'pinpoint regulation,' and that overly strict regulations infringing on freedom of expression would be inappropriate.
Professor Choi said, "It is necessary to accurately understand the severity and level of harm of the problem and establish a regulatory system that precisely targets it." He also emphasized, "An environment and atmosphere should be created where online platform operators can actively engage in self-regulation."
Meanwhile, the forum also raised the need for 'media literacy' education covering the potential misuse of deepfake technology, ethical responsibility, and appropriate SNS use. Some argued that possession or viewing of deepfake sexual crime material should be punishable, as with child and adolescent sexual exploitation material, while others questioned the enforceability and effectiveness of such measures.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.


