The ‘Artificial Intelligence (AI) Act’ approved by the 27 member states of the European Union (EU) is a regulatory bill that restricts biometric data collection and strengthens transparency obligations for AI.
The European Commission proposed the bill three years ago. After the draft was finalized in June last year, it cleared the crucial trilateral negotiations among the Council, the Commission, and the European Parliament in December of the same year, the most important gateway in the EU legislative process. Subsequently, on the 2nd of this month (local time), the final compromise text was approved at a meeting of the permanent representatives of the 27 EU member states, leaving only the European Parliament’s approval procedure. If the responsible parliamentary committee votes in favor on the 13th and the bill passes the plenary session in March or April, the world’s first AI regulatory law will be born.
The AI Act, finalized by the 27 EU countries, centers on classifying AI technologies by risk level and fining companies that fail to comply with the regulations.
First, AI technologies are classified into four tiers by risk level. Under this classification, facial recognition technology falls into the highest-risk tier and is effectively banned, reflecting concerns that indiscriminate collection and use of facial recognition data, among the most sensitive forms of personal information, could infringe on individual privacy and seriously threaten data security. Exceptions are allowed, however, for national security, criminal investigations, and security purposes. Applications that could affect democratic processes, such as elections, are also classified as high-risk AI.
The use of AI for so-called ‘social scoring,’ which quantifies the social influence of individuals and companies on social media and other platforms, is also prohibited. The aim is to prevent AI from being misused to artificially inflate social recognition online.
Additionally, the AI Act clearly defines its regulatory targets as ‘high-risk AI’ and ‘general-purpose AI’ and imposes mandatory reporting on both. Companies deploying high-risk technologies such as autonomous vehicles or medical devices must disclose AI-related data and undergo strict testing. Providers of large language models (LLMs) such as OpenAI’s GPT and Google’s Gemini are subject to transparency obligations, including compliance with EU copyright law and publication of summaries of the content used for training.
Companies that violate the AI Act face fines of up to 35 million euros (approximately 50 billion KRW) or 7% of their global revenue.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
![[News Terms] Facial Recognition Restricted, World's First Regulatory Law 'AI Act'](https://cphoto.asiae.co.kr/listimglink/1/2023121110332984044_1702258410.jpg)

