Ministry of Science and ICT Announces Draft Enforcement Decree of AI Basic Act
Pursuing Transparency While Easing the Burden on Businesses
Going forward, when businesses provide products or services using generative artificial intelligence (AI) or high-impact AI, they must inform users in advance that AI is involved. Outputs that are difficult to distinguish from reality, such as deepfakes, must be clearly labeled as "AI-generated content." Factors such as the user's age and physical condition must also be taken into account.
High-impact AI refers to AI systems that can have a significant impact on life, safety, or fundamental rights, and includes technologies used across society in areas such as finance, healthcare, education, and employment. The government will determine whether an AI system qualifies as high-impact AI by comprehensively assessing the area of use, the severity of risk, and the potential for infringement of fundamental rights. If a business requests confirmation, the government will notify them of the results within 30 days.
High-impact AI operators are required to publicly disclose key measures covering risk management, explainability, user protection, supervision, and documentation. Trade secrets, however, are exempt from this disclosure requirement.
Full-Scale Legislation for Ensuring AI Transparency and Reliability
The Ministry of Science and ICT announced on November 12 that it has prepared a draft enforcement decree for the "Basic Act on the Promotion and Trust-building of Artificial Intelligence (hereinafter referred to as the AI Basic Act)" and will release it for public consultation. The public consultation period will run for 40 days, from November 12 to December 22, during which a wide range of opinions will be collected from industry, civic groups, experts, and relevant ministries.
The AI Basic Act was passed by the National Assembly last year through bipartisan agreement and is scheduled to take effect on January 22 next year. The enforcement decree aims to specify the details delegated by the law and establish an institutional framework to balance innovation in the AI industry with the creation of a foundation of trust.
The Ministry of Science and ICT presented a draft to the National AI Strategy Committee in September and collected initial feedback, and this public consultation will incorporate additional opinions from the field.
Minimizing Overlapping Regulations... Industry-Friendly System Design
The draft enforcement decree focuses more on promotion than regulation. The Ministry of Science and ICT has coordinated with relevant ministries such as the Ministry of Food and Drug Safety, the Financial Services Commission, the Nuclear Safety and Security Commission, and the Personal Information Protection Commission to ensure that if businesses have already fulfilled the same obligations under other laws, the responsibilities under the AI Basic Act will not be redundantly applied.
For example, in the medical field, if the safety and reliability standards of the Digital Medical Devices Act are met, and in the financial field, if the standards of the Electronic Financial Transactions Act are satisfied, no additional duplicate verification will be required. This measure is intended to reduce administrative burdens on businesses and encourage autonomous innovation in the industry.
The enforcement decree also specifies the basis for supporting the systematic development of the domestic AI industry. It clarifies the scope and criteria of projects to expand the AI ecosystem, such as AI research and development (R&D), building training datasets, technology adoption and utilization, startup support, nurturing professionals, promoting convergence between industries, and supporting overseas expansion.
Additionally, the procedures for designating and operating "AI convergence clusters" to foster regional industries have been formalized, and the role of a dedicated organization to provide comprehensive support has also been stipulated.
The policy implementation system has also been detailed. Criteria have been established for designating and operating institutions responsible for executing national AI policies and ensuring safety and trust, such as the AI Policy Center, the Artificial Intelligence Safety Research Institute, and dedicated organizations for AI convergence clusters. The AI Policy Center will handle international norms and policy research, the Artificial Intelligence Safety Research Institute will be responsible for verifying technical safety and ensuring trust, and the dedicated organizations will manage cluster operations and support businesses.
Safety Management Standards for Ultra-High-Performance AI... "10²⁶ FLOPs or More"
The enforcement decree also introduces new safety standards for artificial intelligence. High-performance AI systems subject to safety obligations are defined as those trained with a cumulative amount of computation of at least 10 to the 26th power (10²⁶) floating point operations (FLOPs).
FLOPs, short for floating point operations, is an indicator of the total amount of computation an AI model performs during training. The figure reflects how much data a model can learn from and how complex a model it can run.
A level of 10²⁶ FLOPs or more is recognized internationally as the technical threshold for "ultra-large AI." The standard was set with reference to the EU AI Act (10²⁵ FLOPs) and California's Frontier AI Transparency Law (10²⁶ FLOPs). The Ministry of Science and ICT has indicated that the threshold may be adjusted in the future, taking into account technological advancements and risk levels.
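For a sense of scale, the sketch below estimates a model's training compute with the widely used "6 × parameters × training tokens" rule of thumb for dense transformer models and checks it against the threshold. The approximation and the example model sizes are illustrative assumptions; only the 10²⁶ FLOPs figure comes from the draft decree.

```python
# Illustrative sketch: estimate training compute with the common 6 * N * D
# approximation (N = parameters, D = training tokens) and compare it against
# the 10^26 FLOPs threshold cited in the draft enforcement decree.
# The approximation and example model sizes are assumptions, not from the decree.

THRESHOLD_FLOPS = 1e26  # cumulative training-compute threshold in the draft

def estimate_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer (forward + backward pass)."""
    return 6 * num_parameters * num_tokens

def subject_to_safety_obligations(num_parameters: float, num_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds the threshold."""
    return estimate_training_flops(num_parameters, num_tokens) >= THRESHOLD_FLOPS

if __name__ == "__main__":
    # Hypothetical models: one below and one above the threshold.
    examples = [
        ("model A (70B parameters, 15T tokens)", 7e10, 1.5e13),  # ~6.3e24 FLOPs
        ("model B (1T parameters, 20T tokens)", 1e12, 2e13),     # ~1.2e26 FLOPs
    ]
    for name, n_params, n_tokens in examples:
        flops = estimate_training_flops(n_params, n_tokens)
        status = "covered" if subject_to_safety_obligations(n_params, n_tokens) else "not covered"
        print(f"{name}: {flops:.1e} FLOPs -> {status}")
```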
Detailed 'AI Impact Assessment' for the Protection of Fundamental Rights
The AI impact assessment system, designed to preemptively examine the effects of artificial intelligence on individuals' rights and society, has also been detailed. The assessment includes the groups affected (such as students, patients, job seekers), the fundamental rights impacted (such as the right to life, equality, and education), the nature and scope of the impact, evaluation indicators, and improvement plans. The system is characterized by its focus on self-assessment, allowing companies to independently identify risk factors and reduce side effects.
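Purely for illustration, a company's self-assessment record covering those items might be structured along the following lines; the field names and example values are assumptions made for this sketch, not terminology from the decree.

```python
from dataclasses import dataclass
from typing import List

# Illustrative only: field names and example values are assumptions,
# not terminology taken from the draft enforcement decree.
@dataclass
class AIImpactAssessment:
    affected_groups: List[str]        # e.g., students, patients, job seekers
    fundamental_rights: List[str]     # e.g., right to life, equality, education
    impact_nature_and_scope: str      # description of the expected impact
    evaluation_indicators: List[str]  # how the impact will be measured
    improvement_plans: List[str]      # measures to reduce identified risks

example = AIImpactAssessment(
    affected_groups=["job seekers"],
    fundamental_rights=["equality"],
    impact_nature_and_scope="Automated resume screening may disadvantage some applicant groups.",
    evaluation_indicators=["selection-rate parity across applicant groups"],
    improvement_plans=["periodic bias audits", "human review of rejected applications"],
)
print(example.affected_groups, example.improvement_plans)
```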
Deputy Prime Minister and Minister of Science and ICT, Paek Jonghun, is responding to questions from ruling and opposition committee members during the National Assembly Science, ICT, Broadcasting and Communications Committee’s audit of the Ministry of Science and ICT held last month at the Government Complex Sejong. Photo by Yonhap News Agency
To help businesses adapt in the early stages of implementation, the government will operate a guidance period during which the imposition of fines will be suspended for at least one year. During this period, a "Comprehensive AI Basic Act Support Center" (tentative name) will be established to assist companies experiencing difficulties in complying with the law and fulfilling their obligations. The center will provide consultation on legal interpretation, guideline application, and impact assessment methods, helping companies avoid misunderstandings or unnecessary burdens regarding the regulations.
Additionally, the government plans to support costs necessary for AI testing, certification, and impact assessments, and will also operate expert consulting programs on transparency obligations and responsibilities for high-impact AI operators. The government intends to continuously improve laws and guidelines by reflecting feedback from the industry during the operation of the system.
Deputy Prime Minister and Minister of Science and ICT, Paek Jonghun, stated, "The enforcement decree of the AI Basic Act will serve as the institutional foundation for Korea to become one of the top three AI powerhouses (G3). During the public consultation period, we will fully reflect the opinions of the industry and the public to balance the dual goals of AI industry development and ensuring safety and trust."
The draft enforcement decree released for public consultation can be found on the Ministry of Science and ICT website (in the 'Legislation and Administrative Notice' section), and opinions can be submitted via email or mail until December 22.
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.



