Naver Unveils AI Safety Practice Framework... Targeting Global Market with Safe AI

The Nation's First AI Safety System Design
Establishment of Risk Mitigation Response System

On the 17th, Naver announced the 'Naver AI Safety Framework (ASF)' through its in-house technology channel, 'Channel Tech.' The framework is a response system designed to recognize, evaluate, and manage the potential risks of AI at every stage of developing and deploying an AI system. It is the first AI safety framework to be designed and implemented in South Korea.


[Photo] Naver Headquarters in Seongnam, Gyeonggi. Photo by Jinhyung Kang aymsdream@

Naver ASF defines the risks posed by AI systems as 'loss of control risk' and 'misuse risk,' and sets out methods to address each. To mitigate the 'loss of control risk,' in which humans can no longer influence an AI system, the framework periodically evaluates and manages AI system risks using the 'AI Risk Assessment Scale.' AI systems at the highest current level of performance are defined as 'frontier AI,' and risk assessments are conducted every three months for systems at this technological level. If a system's capability increases more than sixfold compared to the previous assessment, an additional assessment is conducted at that time.
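The assessment cadence described above can be sketched as a simple trigger check. This is an illustrative sketch only, not Naver's implementation: the class, the field names, the day-based schedule, and the notion of a single scalar "capability" metric are all assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical constants inferred from the article's description.
QUARTERLY_DAYS = 90        # periodic assessment every three months
CAPABILITY_TRIGGER = 6.0   # a more-than-sixfold capability jump triggers an extra assessment


@dataclass
class FrontierSystem:
    """Minimal stand-in for a frontier AI system under assessment (hypothetical)."""
    days_since_assessment: int
    capability: float                 # current measured capability (assumed scalar metric)
    capability_at_last_check: float   # capability recorded at the previous assessment


def needs_assessment(system: FrontierSystem) -> bool:
    """True if either the periodic or the capability-jump condition is met."""
    periodic_due = system.days_since_assessment >= QUARTERLY_DAYS
    capability_jumped = system.capability > CAPABILITY_TRIGGER * system.capability_at_last_check
    return periodic_due or capability_jumped
```

For example, a system three months past its last check, or one whose measured capability has more than sextupled since that check, would both trip the trigger.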


For the other potential risk defined by Naver ASF, the possibility of 'misuse,' the framework responds with the 'AI Risk Assessment Matrix.' The matrix manages risks differently depending on an AI system's intended use and the level of safety measures it requires. For example, AI systems used for special purposes such as biochemical substance development are provided only to users with special qualifications, to mitigate risk. Regardless of intended use, an AI system that requires a high level of safety measures will not be deployed until its risks are mitigated through additional technical and policy safeguards.
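The two-axis decision logic described for the matrix can be sketched as follows. Again, this is an assumption-laden illustration: the boolean inputs, the decision labels, and the rule ordering are invented for the example, not taken from Naver's framework.

```python
def deployment_decision(special_purpose: bool,
                        high_safety_needed: bool,
                        risks_mitigated: bool) -> str:
    """Hypothetical sketch of the AI Risk Assessment Matrix described in the article."""
    # Regardless of intended use, a system requiring high safety measures is
    # held back until additional technical and policy safeguards mitigate the risk.
    if high_safety_needed and not risks_mitigated:
        return "do not deploy"
    # Special-purpose systems (e.g. biochemical substance development) are
    # provided only to users holding special qualifications.
    if special_purpose:
        return "deploy to qualified users only"
    return "deploy"
```

The key design point the article describes is that the safety-measure axis dominates: no intended use exempts a system from the mitigation requirement.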


Naver plans to develop the Naver ASF into an AI safety framework that reflects cultural diversity, jointly developing 'Sovereign AI' with governments and companies outside Korea. Sovereign AI refers to building independent AI capabilities using a country's or company's own infrastructure and data. Naver will also enhance benchmarks that identify the risks an AI system may pose within specific cultural spheres and measure the degree of risk in a way that reflects the characteristics of those cultures.


Choi Soo-yeon, CEO of Naver, said, "We plan to continuously improve the Naver ASF while developing Sovereign AI in the global market in the future." She added, "Through this, Naver will actively contribute to a sustainable AI ecosystem where multiple AI models reflecting the cultures and values of various regions are safely used and coexist."


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
