Personal Information Protection Commission Holds 'Open Source Day' at GPA Seoul... Discusses AI and Privacy

Google, Meta, OpenAI, Naver, and Others Participate

The Personal Information Protection Commission announced on the 15th that it held "Open Source Day" as a pre-event to the upcoming Global Privacy Assembly (GPA) scheduled for the 16th.


The event was attended by over 120 participants, including global open source model and solution companies such as Google, Meta, Microsoft, OpenAI, SelectStar, and AIM Intelligence, as well as domestic AI companies, researchers, and overseas supervisory authorities, who discussed the open source AI ecosystem and privacy issues.


Choi Janghyuk, Vice Chairman of the Personal Information Protection Commission, delivers a welcoming speech at the "Open Source Day" held on the afternoon of the 15th at the Grand Hyatt Seoul in Yongsan-gu, Seoul. Photo by Personal Information Protection Commission

Earlier this year, the Personal Information Protection Commission held a meeting with AI startups to discuss privacy guardrails for the open source ecosystem. Between last year and early this year, the commission conducted a preliminary survey of major large language models (LLMs), including open source models, and identified privacy risks in the open source AI environment.


Based on these efforts, the commission detailed risk management and responsibility allocation measures for the open source ecosystem in its published "AI Privacy Risk Management Model" and "Guide to Personal Information Processing in Generative AI."


According to a brief survey conducted by the commission prior to the event, about 62% of the 70 participating developers, researchers, and company representatives had experience adopting or utilizing open source. Additionally, 77% responded that they had considered safety when fine-tuning open source models.


At the event, global open source AI companies presented their open source ecosystems and real-world application experiences. Google introduced Vertex AI, a platform for operating open source models cost-effectively, and shared how to use its reliability and safety tooling, including LLM quality evaluation, prompt optimization, and safety enhancement features.


AIM Intelligence shared the safety and information security challenges companies face when operating customer-facing AI services and using AI models for internal work, along with its experience addressing them. The company also received the "Llama Impact Innovation Award" for advancing Meta's open source AI filtering model "Llama Guard" to suit the Korean context.


Microsoft presented customer case studies of building agent AI on Azure AI Foundry and highlighted the potential of open source models and tools for building agent AI. OpenAI introduced its newly released open source models (gpt-oss-20b/120b) and addressed issues surrounding the spread of open source, including the economic and social value of open source models, concerns about safety and responsibility, and the need for global-level discussions.


Naver introduced tools for working with open source, including its open source model HyperCLOVA X, public datasets, benchmarks, and an AI safety framework. SelectStar presented its AI reliability verification solution (DATUMO Eval), built on open source models and technologies, and shared how this has contributed to expanding the open source ecosystem in its AI data and reliability business.


During the subsequent live Q&A session, participants discussed on-site challenges and solutions, including difficulties encountered in adopting open source and privacy-related concerns. The discussion covered ensuring safety and reliability in the use of open source, such as filtering and verifying personal and sensitive information, considerations during fine-tuning, and designing red team tests.


The final session featured a roundtable with data protection authorities from four countries: South Korea, the United Kingdom, Italy, and Brazil. The authorities discussed privacy considerations in the open source AI ecosystem and agreed on the need for trustworthy AI implementation.


Choi Janghyuk, Vice Chairman of the Personal Information Protection Commission, stated, "This Open Source Day is highly significant as it marks the first public forum in Korea to discuss both the open source AI ecosystem, which underpins innovative services like agent AI, and personal information protection. We will actively reflect the voices from the field in our policies so that companies and researchers can utilize open source with confidence."


Kim Huikang, Non-standing Commissioner of the Personal Information Protection Commission, who attended the event, said, "The open and sharing culture of open source is accelerating the spread of cutting-edge technologies and driving innovation across various industries and society as a whole. I hope this will serve as a meaningful starting point for jointly pursuing open and trustworthy AI development."


© The Asia Business Daily (www.asiae.co.kr). All rights reserved.

