PIPC Confirming Whether South Korean User Information Was Leaked
OpenAI Considering ChatGPT Usage Restrictions for Minors
Calls to "Improve Through Utilization" Amid Global Regulatory Measures
As countries such as Italy and Canada move to regulate ChatGPT, the South Korean government is also examining whether any personal information has been leaked. It plans to verify whether domestic users were among those whose personal information was recently exposed by ChatGPT, and to investigate how personal data has been used in the service's training data.
As of the 6th, the Personal Information Protection Commission (PIPC) is checking whether payment information of paid ChatGPT users in South Korea was exposed. On the 20th of last month, a software bug in ChatGPT caused some users' information to be shown to other users. Among paid ChatGPT Plus subscribers, 1.2% had their names, email addresses, and the last four digits and expiration dates of their credit cards exposed.
A PIPC official stated, "If there are domestic users involved, we will thoroughly examine whether South Korea's Personal Information Protection Act can be applied to the overseas business operator OpenAI." After completing the relevant verification, the PIPC plans to investigate ChatGPT's training data as well. They are reviewing the scope of the investigation, including how domestic user data is utilized in the training data and whether there are any issues under the Personal Information Protection Act.
Overseas, moves to regulate ChatGPT are gaining momentum. Italy blocked access to ChatGPT on the 31st of last month, citing personal information protection concerns, and launched its own investigation. Italian authorities are examining whether ChatGPT provided inappropriate responses to minors without verifying users' ages and whether personal information was used without authorization. Following this, Canada and France have opened investigations into unauthorized collection of personal data, and Germany has requested investigation materials from Italy while hinting that it may also block ChatGPT.
OpenAI, the US startup that developed ChatGPT, has moved to contain the fallout. On the 5th, it published measures for building safe artificial intelligence (AI) systems on its blog. Regarding the protection of minors, it said it is considering usage restrictions: it is exploring ways to verify that users are either 18 or older, or at least 13 with parental consent, before they can use its AI tools.
OpenAI also stated that it is committed to protecting personal information. According to the company, ChatGPT was trained on content available on the internet, some of which may have included publicly available personal information. In such cases, OpenAI says it deletes the personal information or fine-tunes the AI model to refuse requests for personal data, in order to minimize the chance that ChatGPT responds with someone's personal information.
OpenAI emphasized that, to make AI safer, utilization matters more than regulation, since AI must learn from diverse real-world situations to improve. "Although we spent more than six months improving safety before releasing GPT-4, we cannot predict all risks," the company explained, adding, "We need to expose it to more people so we can monitor misuse and take action."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.