A case has come to light in Jeonju, Jeollabuk-do, in which male middle school students composited the faces of teachers and classmates onto nude images and distributed the results. The students are under police investigation for producing and distributing obscene material using deepfake technology powered by artificial intelligence (AI). Deepfake crime is emerging as a social issue. ‘Deepfake’ is a portmanteau of ‘deep learning’ and ‘fake’, referring to images, audio, and video fabricated or edited with AI. Cases of fake videos and fake news spread by abusing deepfake technology have been increasing recently.
However, no clear measures to eradicate such crimes have been proposed. Even when a perpetrator is identified, proving the charges and securing punishment is difficult. The so-called ‘Seoul National University deepfake sex crime’ case illustrates the problem. When deepfake obscene material is distributed on platforms whose servers are overseas, finding the perpetrator is not easy, and even when one is found, proving the charges is complicated. If the perpetrator claims the material was kept for personal use with no intent to distribute, punishment becomes practically difficult.
◆ More Intelligent Deepfakes with Generative AI
According to the Korean Intellectual Property Office and the Korea Intellectual Property Research Institute on the 1st, deepfake crime is cited as a representative side effect of generative AI. Generative AI creates new output by learning from existing data, and beyond deepfake crime it has become a contentious issue in the field of intellectual property. AI has reached the point where it can produce copyrighted works such as music and paintings, and even inventions, with minimal human intervention, raising a host of related questions.
Generative AI surged in prominence after the U.S. AI developer ‘OpenAI’ released ChatGPT in November 2022. ChatGPT’s convenience drew in a sharply larger number of generative AI users, but adequate safeguards against misuse are still lacking. The responsibility of generative AI service providers has recently come to the fore: because the legal status and scope of providers’ liability remain unclear, it is difficult to hold them directly accountable for problems that arise during service provision.
Quite apart from punishing the perpetrators of deepfake crimes, it is difficult to hold service providers accountable even when they have failed to put safeguards against the misuse of deepfake technology in place. The prevailing logic is that providers merely supply generative AI as a tool, while individual users employ that tool by their own choice and at their own responsibility, so ultimate responsibility rests with the users.
However, given the range of social problems generative AI has caused, experts in the field generally find it hard to conclude that service providers bear no responsibility at all. Against this backdrop, major countries are putting AI regulations in place.
◆ Major Countries Implement AI Regulatory Laws... Preventing Copyright Infringement and More
According to the report ‘Trends and Implications of AI Regulation in Major Countries’ released by the Korea Intellectual Property Research Institute, the European Parliament gave final approval in March to the ‘Artificial Intelligence Act (AI Act)’, a framework law intended to promote AI innovation while guaranteeing safety and respect for fundamental rights.
The law stipulates that those deploying AI systems that generate or manipulate image, audio, or video content (such as deepfakes) must disclose that the content was artificially generated or manipulated, except in narrowly limited cases such as crime prevention. This provision can also serve as a means of preventing copyright infringement by generative AI services. Providers of generative AI models, in particular, are required to design, develop, and train their models so as to prevent the generation of illegal content, establishing a duty-of-care standard for heading off side effects during the provision of generative AI services.
Before Europe acted, U.S. President Biden signed an executive order in October last year on the ‘safe, secure, and trustworthy development and use of AI’. The executive order is the first legally binding AI regulation at the federal level.
Ahead of the executive order, the U.S. generative AI service industry pledged to the government to disclose the results of system functionality tests and potential-risk assessments, to label (watermark) AI-generated content, and to protect children from AI content. On intellectual property, the order directs the U.S. Patent and Trademark Office (USPTO) to provide guidance to patent examiners on inventorship questions involving humans and AI and on patent eligibility, and to publish research reports addressing the copyright issues raised by AI technology.
China, too, has been enacting and implementing laws on generative AI services since last year. There, the scope of AI service providers’ responsibility has become the most sensitive issue.
The original draft imposed the liability of a content producer on service providers, but the final version was softened to require providers to take timely action upon discovering illegal content and to report it to the relevant authorities. The Korea Intellectual Property Research Institute explains that this reflects concerns that holding providers responsible as content producers, the strictest form of regulation, could discourage them from developing generative AI technology.
Yoo Gye-hwan, a commissioner at the Korea Intellectual Property Research Institute, and Kim Yoon-myung, director of the Digital Policy Research Institute, said, “As AI development accelerates, the side effects of generative AI technology have recently come into sharp focus. Legislation on generative AI is at a turning point, and this is why major countries keep returning to the discussion of AI regulation.”
They added, “Current law clearly shows its limits when it comes to service providers’ responsibility for the various problems arising from the use of generative AI. Given that sweeping regulation could hold back AI development in Korea, the characteristics of generative AI systems and of the entities to be regulated need to be identified in detail, and partial, phased regulatory measures prepared for high-risk activities.”
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
![Side Effects of Generative AI like 'Deepfake' Crimes... Is Responsibility Only on Users? [Why&Next]](https://cphoto.asiae.co.kr/listimglink/1/2020050415301818506_1588573819.jpg)
![Side Effects of Generative AI like 'Deepfake' Crimes... Is Responsibility Only on Users? [Why&Next]](https://cphoto.asiae.co.kr/listimglink/1/2024062716071432141_1719472035.jpg)

