[Asia Economy Reporter Park Cheol-eung] The National Human Rights Commission of Korea (NHRCK) has officially stated that artificial intelligence (AI) development carries risks of discrimination, surveillance, and other human rights violations, and that the National Assembly should incorporate human rights protection provisions when enacting related promotion legislation.
According to the National Assembly on the 1st, the NHRCK's standing committee recently issued this opinion regarding the "Act on the Promotion of the Artificial Intelligence Industry," proposed last November by then-independent lawmaker Kim Kyung-jin.
The bill, which would have required the government to establish basic plans and systematically support the AI industry, was discarded when the 20th National Assembly's term expired without its passage. However, similar bills are considered likely to be reintroduced in the 21st National Assembly.
The NHRCK stated, "Basic, general provisions on human rights protection need to be reflected in the AI industry bill," adding, "Principles of respect for human rights and human dignity, as well as principles for preventing discrimination caused by AI, should be incorporated." It warned of machine bias, analogous to the cognitive bias that arises from human subjectivity and unconscious assumptions. The NHRCK emphasized, "There are concerns that developers' biased ideologies or views may be reflected in AI, and that social prejudices and discriminatory factors already embedded in the data AI learns from may themselves be learned."
The NHRCK cited examples from the United States: an AI-based applicant-screening system developed by the e-commerce company Amazon that unjustifiably penalized women, and a U.S. sentencing-support risk-assessment system that rated Black defendants as up to 77% more likely than defendants of other races to commit violent crimes in the future.
The risks of surveillance and infringement of personal rights are also major concerns. The NHRCK noted, "The dangers of deepfake technology, which uses AI to synthesize or fabricate entirely new videos of specific individuals, have been pointed out," adding, "According to media reports, 96% of deepfake videos worldwide are produced for pornographic purposes, and Korean female celebrities account for 25% of the victims, which poses a serious threat to human dignity."
The opacity of AI decision-making is also a problem. The NHRCK explained, "For an AI like AlphaGo, whose only goal is to win games such as Go, opacity may not be a significant issue. But when AI makes judgments in areas closely tied to people's daily lives or human rights, the basis for those value judgments becomes critically important."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.