[Reporter’s Notebook] AI Regulation Guidelines Must Become the Cornerstone for Strengthening the Ecosystem


“The biggest issue we felt while preparing the service is the lack of clear regulations.”


This was the comment of a startup representative the reporter met at a forum last month. The startup, which trains artificial intelligence (AI) on data disclosed both online and offline to provide 3D information, could not find clear rules on data training. The AI industry, now drawing more attention than any other, is gripped by anxiety over when, and in what form, regulations might emerge that could bring its services to a halt.


Against this backdrop, the “Policy Direction for the Safe Use of Personal Information in the AI Era,” announced by the Personal Information Protection Commission on the 3rd, has drawn attention. It does not lay out specific AI laws; rather than detailed “regulations,” it is a guideline built around “principles.” Its significance lies in reducing the uncertainty companies currently feel by signaling the future direction of AI policy.


What stands out in the policy direction unveiled this time is the Personal Information Protection Commission’s choice of a relatively lighter-touch negative (ex-post) regulatory approach over a positive (ex-ante) one. Positive regulation lists what is permitted and prohibits everything else; negative regulation permits everything that is not explicitly prohibited. The United States operates essentially on a negative system, while South Korea operates on a positive one.


OpenAI’s ChatGPT, which set off the global AI craze last year, was also able to grow under a negative regulation framework. U.S. regulators clearly specified the requirements that must be met, such as keeping AI free of bias and prohibiting discrimination based on personal information. With the assurance that it only needed to meet those requirements, OpenAI was able to rapidly train its AI on large-scale data and create ChatGPT.


Some argue that South Korea should follow the European Union (EU), which has focused on strong AI regulation. Europe has few companies developing and providing AI services, and there is deep anxiety that overseas services such as ChatGPT could leak the region’s information. Some countries have even gone so far as to block ChatGPT.


The domestic situation is different, however. Naver and Kakao are developing hyperscale AI models, and numerous startups have already launched AI services recognized worldwide. To avoid falling behind in the race for AI leadership, what is needed first is growth, not regulation.


The Personal Information Protection Commission will establish an “AI Privacy Task Force” to provide prompt legal interpretations on personal information issues arising in AI development and services. It will also introduce a “pre-appropriateness review system” to minimize the uncertainty and risk companies face, and the application of a regulatory sandbox is under consideration as well. All of these measures are aimed at supporting corporate growth. We hope this announcement by the Personal Information Protection Commission becomes a cornerstone for the growth of the domestic AI ecosystem.


© The Asia Business Daily (www.asiae.co.kr). All rights reserved.
