Samsung, Naver, and Others "Will Create Trustworthy AI"... Public-Private Discussions on Safety

Ministry of Science and ICT 'AI Trust and Safety Conference'
Seven Companies, Including Naver, Kakao, and SKT, Share Implementation Status

The public and private sectors have joined forces to promote the development of safe and reliable artificial intelligence (AI). On the 26th, the Ministry of Science and ICT held the 'AI Trust and Safety Conference' at the War Memorial of Korea in Seoul, attended by more than 200 participants, including major domestic AI companies, startups, and researchers.

At this conference, participants shared trends in AI trustworthiness and safety-related technologies and policies, reviewed key achievements from government-supported research this year, and assessed the implementation status in the private sector.


Professor Yoshua Bengio of the University of Montreal, a world-renowned scholar in the AI field, emphasized in his keynote speech that "effective management of risks associated with cutting-edge AI models requires harmonization between domestic laws and international agreements."


He added, "From a scientific perspective, alignment and control of AI models are critical issues," and stated, "It is necessary to expand government support and the role of AI safety research institutes to secure quantitatively measurable AI model risk assessment and risk management technologies."


Oh Hye-yeon, Director of the KAIST AI Research Institute, shared trends in the international race for AI supremacy and explained the importance of AI as a strategic asset.


During the event, six domestic companies that participated in the 'AI Seoul Corporate Pledge' in May shared their implementation status regarding risk management plans, technology research, and internal governance to develop safe and reliable AI. The participating companies are Samsung Electronics, Naver, Kakao, SK Telecom, KT, and LG AI Research.


These companies reaffirmed that they deeply recognize their responsibilities for the AI products and services they provide and pledged to further strengthen their efforts to ensure trust and safety.


At the conference, the Telecommunications Technology Association (TTA) presented seven major risks identified by analyzing the large volume of attack attempts submitted by participants in the generative AI Red Team Challenge, along with attack techniques such as neutralizing the model's refusal functions and inducing confusion. The main risks cited were misinformation, bias and discrimination, generation of illegal content and provision of illegal information, jailbreaking, cyberattacks, infringement of individual rights, and inconsistency.


The Ministry of Science and ICT plans to utilize the results of the Red Team Challenge to establish a safety framework for generative AI and support preemptive identification of potential risks in generative AI models to ensure trust and safety.


On the same day, the '2nd AI Trustworthiness Awards' ceremony was held to spread awareness of the importance of AI trustworthiness across the industry and to support the promotion of excellent AI products and services.


Song Sang-hoon, Director of the Information and Communication Policy Office at the Ministry of Science and ICT, stated, "While strengthening policy support to spread a culture of responsible AI development and utilization based on private sector autonomy, we will launch an AI Safety Research Institute to prepare for potential risks from advanced AI and respond systematically at the national level."


© The Asia Business Daily (www.asiae.co.kr). All rights reserved.
