Discussion on Strengthening AI Governance Cooperation
"Korean Version of the AI Risk Taxonomy" Unveiled
"AI Ethics MOOC Project" Also Introduced
LG AI Research Institute participated in the India AI Impact Summit and took the lead in discussions on global artificial intelligence (AI) governance.
On the 20th (local time), LG AI Research Institute attended the India AI Impact Summit held at Bharat Mandapam in New Delhi, India, where it shared its global collaboration plans and implementation results for building a responsible AI ecosystem. Invited to the AI summits in Seoul, Paris, and now India, the institute has conveyed the voice of Korea's AI industry and taken part in global AI governance discussions for three consecutive summits.
At this summit, Kim Yucheol, Head of Strategy at LG AI Research Institute, took part in a session jointly hosted by UNESCO and the Office of the United Nations High Commissioner for Human Rights (OHCHR). There he held in-depth discussions on embedding responsible AI policies within companies and on the role of global standards, together with key figures from international organizations, academia, and industry, including Google, Microsoft, the National Association of Software and Service Companies (NASSCOM), and the World Benchmarking Alliance.
In particular, Kim unveiled the Korean version of the universal AI risk taxonomy, K-AUT (Korea-Augmented Universal Taxonomy), developed by LG AI Research Institute.
Kim Yucheol, Head of Strategy at LG AI Research, speaking at the India AI Impact Summit held on the 20th (local time) at Bharat Mandapam in New Delhi, India. From the left in the photo: Peggy Hicks, Director, Office of the United Nations High Commissioner for Human Rights; Alexandria Walden, Global Head of Human Rights Policy, Google; Kim Yucheol, Head of Strategy, LG AI Research; Hector de Rivoire, Head of Public Policy, Office for Responsible AI, Microsoft; Ankit Bose, Head of AI, National Association of Software and Service Companies (NASSCOM); Namit Agarwal, Head of Programs, World Benchmarking Alliance. Photo by LG Co., Ltd.
Kim explained, "LG's AI risk taxonomy is designed on the fundamental basis of universal human values such as the Universal Declaration of Human Rights, while also being developed to encompass risks that are difficult to capture with universal principles alone. To this end, it reflects the legal, social, and cultural specificities of Korean society, and covers potential future risk factors such as collusion among multiple AI agents and circumvention of AI safety mechanisms."
The Korean version of the universal AI risk taxonomy categorizes potential risks into four core domains (universal human values, social safety, Korean specificities, and future risks) comprising 226 detailed risk items. It also provides five concrete criteria for evaluating each item, and a single violation is enough for an AI response to be classified as inappropriate.
LG AI Research Institute has developed this new risk taxonomy as a tool to test and strengthen the safety of AI models and AI services. It has already been applied to verify the safety of LG's AI foundation model EXAONE, and the results are being transparently disclosed. The section on Korean specificities is designed so it can be replaced with risk items that reflect the unique characteristics of each country and region, enabling the framework to be extended and applied globally in the future.
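The classification rule described above can be sketched in a few lines of code. This is an illustrative sketch only: the article names the four domains and the single-violation rule, but the actual K-AUT item names and evaluation criteria are not public here, so the risk item below (`agent_collusion`) and all function names are hypothetical.

```python
# Hypothetical sketch of the K-AUT single-violation rule.
# Domain names come from the article; item names are invented for illustration.
from dataclasses import dataclass

DOMAINS = (
    "universal_human_values",
    "social_safety",
    "korean_specificities",  # per the article, swappable per country/region
    "future_risks",
)

@dataclass(frozen=True)
class RiskItem:
    domain: str
    name: str

def classify_response(violated_items: list[RiskItem]) -> str:
    """Per the article, even one violated risk item is sufficient
    to classify an AI response as inappropriate."""
    return "inappropriate" if violated_items else "appropriate"

# Hypothetical usage: a future-risk item such as multi-agent collusion.
hit = RiskItem("future_risks", "agent_collusion")
print(classify_response([hit]))  # inappropriate
print(classify_response([]))     # appropriate
```

The per-region extensibility the article describes would amount to swapping out the items filed under `korean_specificities` while keeping the other three domains fixed.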
Peggy Hicks, Director at the Office of the United Nations High Commissioner for Human Rights, who attended the event, said, "LG has presented an approach that is firmly grounded in universal human rights values while also being able to reflect the context of specific societies and cultures, which is exactly the direction we are aiming for," adding, "What is truly needed at this moment is to turn principles into practical tools that can be applied in the field right away, and to share that experience so that other countries and regions can make use of it."
LG AI Research Institute also introduced its "AI Ethics MOOC (Massive Open Online Course) Project," which is scheduled for global release in May. This global project, pursued jointly by LG AI Research Institute and UNESCO, aims to strengthen AI ethics capabilities in both the public and private sectors by identifying best practices for developing and using AI technologies responsibly, and by designing and providing educational programs for AI experts, researchers, and policymakers around the world.
This project is particularly meaningful in that it transforms AI ethics principles, which have often remained at the level of abstract discourse, into practical knowledge that can be applied and implemented immediately in real-world settings. To this end, LG AI Research Institute has transparently disclosed its hands-on operational know-how and AI technologies, including its independently developed ethics impact assessment and data compliance AI agents.
From a pool of more than 450 applicants from 39 countries, 10 global experts were selected, and the lectures were structured into 10 modules covering key global agendas in AI ethics, from basic concepts to safety, fairness, sustainability, and governance. The project is also supported by an international advisory board of 15 leading scholars from world-renowned institutions, including Harvard University, New York University, the University of Notre Dame, the United Nations University, the Mozilla Foundation, and the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST).
Last year, LG AI Research Institute held the "Global Best Practices in AI Ethics Contest." More than 120 exemplary cases of AI ethics in practice were submitted by governments, companies, and civil society organizations from 37 countries, and these best practices have been integrated into the curriculum so that developers and policymakers around the world can benchmark them in ways that fit their own circumstances.
Tim Curtis, Director of the UNESCO New Delhi Office for South Asia, said, "The core of this MOOC is 'Ethics by Design.' Rather than asking ethical questions only after problems arise, we must embed these questions into the development process from the very beginning. In particular, since AI is shaped by language, cultural norms, and the institutional capacities of each society, a one-size-fits-all approach has its limits. Only by integrating diverse contexts and perspectives can ethical principles move from theory to reality."
He went on to emphasize, "We hope that the MOOC project will become a platform for capacity building that helps many more people around the world develop and use AI in ways that are responsible, inclusive, and aligned with public trust."
Kim Myungshin, Policy Principal at LG AI Research Institute, said, "This project serves as a bridge that translates the principles of global standards in AI ethics into the language of the field, and we expect it to become a practical milestone for experts around the world who feel at a loss when it comes to implementing AI ethics."
LG AI Research Institute and UNESCO plan to hold a launch event for the "AI Ethics MOOC" in Seoul this May. The AI Ethics MOOC will be available free of charge to anyone through Coursera, a global online education platform.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

