National Assembly STIBC Holds AI Public Hearing
Industry Voices Concerns Over High-Impact AI, Watermark, and Investigative Authority
Fair Trade Commission's Platform Regulations May Also Hinder AI
Claims have emerged that various regulations, including the "Basic Act on the Development of Artificial Intelligence (AI) and the Establishment of Trust" (AI Basic Act), are obstacles to the advancement of AI technology. It was explained that regulations should be applied gradually, with regulatory targets clearly defined in enforcement ordinances.
On the 25th, Park Seong-ho, chairman of the Korea Internet Corporations Association, attended an AI-related public hearing held by the National Assembly’s Science, Technology, Information and Broadcasting and Communications Committee (STIBC) as a witness and stated, "While the promotion-related provisions in the AI Basic Act should proceed first, the regulatory parts need a phased application with a grace period of about 2 to 3 years," adding, "Since this is the world’s first full-scale implementation, enforcing regulations first could impose a heavy burden on the entire industry."
From the left: Choi Kyung-jin, professor of law at Gachon University; Park Seong-ho, chairman of the Korea Internet Corporations Association; Lee Sang-hak, executive vice president of the Korea Telecommunications Business Association; and Yoo Sang-im, Minister of Science and ICT. Photo by Yonhap News.
The provisions concerning 'high-impact AI' are considered the representative regulations. While the European Union's AI Act uses the term 'high-risk AI,' Korea's law defines AI used in areas that may significantly affect human life, body, or fundamental rights as high-impact AI. Operators' responsibilities for high-impact AI include prior inspection and certification (Article 30), prior notification (Article 31), establishment and operation of risk management plans (Article 34), and impact assessment (Article 35).
Choi Kyung-jin, professor of law at Gachon University and president of the Korean Artificial Intelligence Law Association, said, "The problem lies in the uncertainty of concepts like 'high-impact' or 'significant impact.'" Chairman Park added, "Because defining this (high-impact) is difficult, if the specifics are left to subordinate legislation or various interpretative procedures arise during enforcement, there are concerns that it will be difficult to respond quickly in a rapidly changing environment."
Voices were also raised that the 'AI watermark' provision needs supplementation in subordinate legislation to keep it from becoming excessive regulation. Article 31 of the AI Basic Act stipulates that when products or services using AI are provided, it must be disclosed that they were created by generative AI. Chairman Park said, "Under the current law, it is difficult to broadly guarantee exemptions from the notification and labeling obligations, so the provision could be quite challenging to apply in practice. I hope this part will be addressed in subordinate legislation."
Concerns about abuse or misuse of investigative authority were also raised. Article 40, Paragraph 2 of the AI Basic Act allows authorities to visit an AI company's workplace and inspect its ledgers and documents on the basis of a mere complaint or report, which has sparked controversy: a competitor could file a false report, or the government could launch an on-site investigation over a simple complaint. Chairman Park said, "Although the Ministry of Science and ICT has stated it will exercise its investigative authority only within the minimum necessary scope, please take care that the rights of operators are not infringed."
Science and ICT Minister Yoo Sang-im reports on current issues related to artificial intelligence (AI) at the full meeting of the Science, Technology, Information, Broadcasting and Communications Committee held at the National Assembly on the 25th. Photo by Yonhap News.
Platform regulations promoted by the Fair Trade Commission were also mentioned. Chairman Park expressed concern, saying, "There is a possibility that attempts to integrate AI technology into services will be fundamentally blocked, and not only for platforms. For example, if Naver's AI model answers a query using its own blog or map data as sources, the service might be barred as preferential treatment of its own offerings." He added, "It could also become a target of tariffs under the Trump administration in the U.S., affecting not only AI but all industries."
At the public hearing, policies for AI industry development were also proposed, including ▲ reviving the military service exemption for AI talent ▲ exempting research personnel from the 52-hour workweek ▲ vertically integrating the AI ecosystem ▲ expanding public data disclosure, including court rulings ▲ and fostering a tolerant culture that accepts challenges and failures. Jang Jun-young, a lawyer at Shin & Kim (Sejong), said, "The 11 months remaining until the AI Basic Act takes effect next year are a golden time that will determine whether Korea leaps into the global AI G3 (top three AI powers). How Korea's model answer on AI is given concrete form will decide the future."
Meanwhile, the government assessed that domestic AI technology still trails the United States, which holds the world's most advanced AI technology, by more than a year. According to a survey conducted in March last year by the Institute of Information & Communications Technology Planning & Evaluation (IITP), cited by the Ministry of Science and ICT, the gap behind U.S. AI technology was 1.3 years for South Korea, 1.5 years for Japan, 1.0 year for Europe, and 0.9 years for China. Minister Yoo Sang-im said in a briefing, "Although the domestic AI industry is developing its own models, there is still a gap of more than a year compared to the U.S., and we also lag behind Europe."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

