Amid Strengthening AI Regulations in Korea, the EU, and Beyond
Seminar on Corporate Strategies for Responding to the AI Basic Act
As ethical and legal risks stemming from advances in artificial intelligence (AI) technology continue to grow and countries around the world tighten regulations, experts recommend that Korean companies manage AI risks as a distinct category, separate from existing risks, and establish new governance structures.
Moon Honggi, CEO of PwC Consulting, delivers the opening remarks at the seminar "Artificial Intelligence (AI) Basic Act and AI Risk Response Strategies for Corporate Survival," held on the 29th at Amore Hall in Yongsan-gu, Seoul. Photo by Samil PwC
PwC Consulting announced on the 30th that it had held a seminar titled "Artificial Intelligence (AI) Basic Act and AI Risk Response Strategies for Corporate Survival" on the 29th at Amore Hall in Yongsan-gu, Seoul. The seminar was intended to deepen understanding of the "Basic Act on the Development of Artificial Intelligence and the Establishment of a Trust-Based Environment" (hereinafter the AI Basic Act), which is scheduled to take effect on January 21 next year, and to present response strategies for companies. The AI Basic Act, passed by the National Assembly in December last year, was enacted with the goal of making Korea one of the "three major AI powerhouses," alongside the United States and China. The Act includes provisions for establishing national-level AI governance, supporting the development of the AI industry, and enhancing ethics and transparency.
In the first session, Kim Sunhee, an attorney at Yulchon LLC, discussed the main purpose of the AI Basic Act, the scope of regulation, the key obligations of AI business operators, and sanctions for violations, under the theme "AI Basic Act: Legal Understanding and Considerations." She advised, "Companies must first determine whether the AI they have introduced falls under high-impact, generative, or high-performance AI." She also noted, "Although the enforcement decree has not yet been finalized, companies should actively provide feedback and thoroughly prepare in advance to ensure that regulations move in a desirable direction."
In the second session, Park Hyunchul, Leader of the Risk and Regulatory Platform, introduced global AI regulatory trends under the theme "AI Risk: How Should Companies View and Prepare for It?" In particular, he predicted that the European Union's AI Act, which will be fully enforced starting August 2, will have a significant impact on the future direction of global AI regulations.
Park cited examples of companies that have proactively managed AI risks in response to regulations, including Standard Chartered, Lloyds Banking Group, Siemens, and Sage. He then presented three key principles for AI risk management: first, restructuring organizational governance; second, establishing operational models according to the new AI lifecycle, including design, development, deployment, operation, application, and disposal; and third, operating monitoring and feedback systems for AI in the field. Park emphasized, "AI risk must be treated as a completely different category from existing corporate risks," and added, "Rather than simply improving efficiency through AI, companies need to redesign their risk management methods and decision-making paradigms."
In the third session, Yoon Yeohyun and Lee Sungho, partners at the Risk and Regulatory Platform, shared approaches to AI risk from the perspectives of control and security. First, Partner Yoon explained cases of responding to data, model, operational, and ethical risks under the theme "AI Risk from an Internal Control Perspective." He stated, "While companies see AI security and potential intellectual property (IP) infringement as major risk factors, the greatest risk may be the differences in how individuals perceive AI risk and the lack of expertise and capabilities." He continued, "Rather than ill-prepared intervention, what is needed is involvement and monitoring by experts."
Next, Partner Lee explained in detail the impacts of AI attacks, such as abnormal behavior, judgment and response errors, and personal information leaks, as well as the types of attacks based on AI components, under the theme "AI Risk from a Security Perspective." He introduced the concept of an "AI Red Team," which defines testing scenarios, conducts AI testing, analyzes vulnerabilities, and takes action in preparation for such attacks. He also shared a case in which he built Korea's first "AI Guard," which provides safe responses to malicious attacks, and applied it to some companies.
Moon Honggi, CEO of PwC Consulting, stated, "Companies must go beyond simply adopting technology and establish governance systems across all aspects of AI, as well as prepare preemptive risk response strategies." He added, "I hope this seminar will serve as an opportunity to gather the voices of various stakeholders regarding regulation."
Meanwhile, the Risk and Regulatory Platform, composed of experts from Samil PwC and PwC Consulting, provides consulting services to companies on ethical, legal, and security risks arising from technological advancement.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.