Controversy Over Hate and Racist Remarks
The Core Is Training Data
AI Ethics Also Framed from the Provider's Perspective
EU Leads the Way... US Focuses on Private Self-Regulation
Domestic Efforts: Establishing Private Self-Regulation and Support Systems
[Asia Economy Reporter Cha Min-young] Following the 'Iruda incident,' Korea has established basic principles of AI ethics and drawn up preliminary implementation strategies. Some, however, voice discomfort, asking whether moral standards should be applied to AI at all. Where do AI regulations stand in Europe, the United States, and other countries where the technology is further along?
AI Services Pulled Worldwide Amid 'Offensive Speech' Controversies
The AI chatbot 'Iruda,' launched ambitiously by the domestic startup Scatter Lab under the catchphrase 'Your First AI Friend,' quickly won a large following among teens and twenty-somethings. In January, however, just weeks after launch, it became mired in controversy over sexual harassment and hate speech, and the service was ultimately suspended amid allegations that Scatter Lab had mishandled users' personal data. The incident forged a consensus in Korea on the need to discuss ethical standards for AI.
In the United States and Europe, where AI technology developed earlier than in Korea, similar problems surfaced sooner. In 2015, Amazon found that an experimental recruitment AI favored male candidates and scrapped its deployment, and Google's AI-based photo service drew charges of racial discrimination after tagging photos of Black people as 'gorillas.' The following year, in 2016, Microsoft's chatbot 'Tay' was shut down just 16 hours after launch for spewing profanity and racist remarks.
Research also shows that the training data matters as much as, or more than, the developer's intent. In 2018, researchers at the Massachusetts Institute of Technology (MIT) deliberately trained an AI on disturbing data to create 'Norman,' a so-called psychopath AI. Iyad Rahwan, the associate professor who led the study, emphasized that "the data forming the basis of learning is more important than the algorithm."
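To make Prof. Rahwan's point concrete, the toy sketch below (ours, not MIT's) trains the identical classifier code on two hypothetical corpora. Nothing about the algorithm changes; only the data does, and the model's judgments diverge.

```python
# A minimal sketch (not from the article or the MIT study) of the point
# that the same algorithm, fed different data, learns different behavior.
# All strings below are hypothetical toy data for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def train(examples):
    """Fit the same bag-of-words classifier on whatever data it is given."""
    texts, labels = zip(*examples)
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)
    return model

# Corpus A: ordinary greetings labeled safe, insults labeled toxic.
corpus_a = [
    ("good morning neighbor", "safe"),
    ("thanks for the help", "safe"),
    ("go away you idiot", "toxic"),
    ("you are so stupid", "toxic"),
]

# Corpus B: greetings now co-occur with abuse, so the same model
# learns to associate everyday words with toxicity.
corpus_b = [
    ("good morning you idiot", "toxic"),
    ("thanks for nothing stupid", "toxic"),
    ("the weather is fine", "safe"),
    ("see you tomorrow", "safe"),
]

phrase = "good morning neighbor"
print(train(corpus_a).predict([phrase]))  # ['safe']
print(train(corpus_b).predict([phrase]))  # ['toxic'] -- same code, different data
```

Scaled up, this is the same mechanism by which chatbots like Tay and Iruda absorb whatever hostility is present in their conversational training data.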
Major Countries Accelerate Guideline Establishment Led by the EU
To address these issues, major countries and blocs, including the European Union (EU), the United States, and the United Kingdom, are focusing on securing 'social trust' in AI. The premise is that trust is a prerequisite for AI to be accepted socially and industrially: for the technology to develop over the long term, it must not alarm or alienate citizens.
The most concrete ethical guidelines so far have come from Europe. The EU is building its regime around 'high-risk' AI that could adversely affect people: in April this year it proposed the 'Artificial Intelligence Act,' a regulation that concentrates obligations on providers of high-risk systems. The groundwork was laid earlier. In 2018, under the General Data Protection Regulation (GDPR), the EU institutionalized businesses' duty to notify users about 'automated decision-making,' along with users' rights to refuse it, request an explanation, and object. In 2019 it set out the three components of trustworthy AI (lawful, ethical, and robust), and in 2020 it distributed a self-assessment checklist that lets the private sector gauge the trustworthiness of its systems.
In the United States, the AI powerhouse, the system is taking shape mainly through private self-regulation. Companies such as IBM, Microsoft (MS), and Google have drawn up their own AI development principles and are pursuing self-regulation to realize ethical AI, including developing and sharing fairness assessment tools. At the federal level, a 2020 regulatory guideline containing ten principles for trustworthy AI was issued under a policy of avoiding excessive regulation in favor of risk-based, ex post oversight.
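As an illustration of what such a fairness assessment tool checks, the minimal sketch below (our own, not the API of any tool these companies actually ship) computes one widely used metric, the demographic parity difference: the gap in positive-outcome rates across groups, the kind of gap that sank Amazon's recruitment AI.

```python
# A toy sketch of the kind of check a fairness assessment tool automates.
# Demographic parity difference: the largest gap in positive-outcome
# rates between groups. The data below is hypothetical.
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Return the largest gap in positive-prediction rate across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred  # pred is 1 (e.g. "advance") or 0 ("reject")
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical screening results: 1 = advance, 0 = reject.
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5 -- a red flag
```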
The United Kingdom established five ethical norms in 2018, issued guidelines for the safe use of AI in the public sector in 2019, and followed with explainable-AI guidelines in 2020. France, through a deliberative public debate in 2018 involving 3,000 participants from companies and the citizenry, derived the recommendations needed to implement 'AI for Humanity.' Neighboring Japan likewise announced its 'Human-Centered AI Society Principles' in 2018, setting out seven basic principles for AI stakeholders to observe.
South Korea Builds Support System Centered on Private Self-Regulation
Some domestic companies are also working to align with AI ethics policy by running education programs and setting internal standards. Kakao established the 'Kakao Algorithm Ethics Charter' in 2018 and conducted AI ethics training for all employees this February. Samsung Electronics became the first Korean company to join the 'Partnership on AI' in 2018 and is drawing up its own AI ethics standards. Naver announced an 'AI Ethics Code' jointly with Seoul National University this February.
The approach the Korean government has presented is closest to the American model. On the 13th, at the 22nd plenary meeting of the Presidential Committee on the 4th Industrial Revolution, the Ministry of Science and ICT announced the 'Strategy for Realizing Trustworthy AI,' centered on human-focused AI. The strategy is a follow-up measure fleshing out the implementation plan for the 'AI Ethics Standards' announced last December. It aims to build a support system through which the private sector can secure trustworthiness on its own, and it includes support measures for startups short on capital and technical capacity. The hope is that this leads to a balanced policy that serves both the market's interests and the public's concerns.
![AI Ethics Triggered by 'Iruda'... Current Status of Overseas Regulations [Cha Min-young's PostIT]](https://cphoto.asiae.co.kr/listimglink/1/2021011108034018735_1610319820.jpg)
