Concerns are growing over cyberattacks and personal information leaks using generative artificial intelligence (AI) technologies such as ChatGPT. Experts emphasize the need for proactive government regulations and the establishment of a public-private cooperative ecosystem to build a response system.
Professor Sang-Geun Lee of the Korea University Graduate School of Information Security pointed out at the "Generative AI Security Threat Response Measures Forum," hosted by the Ministry of Science and ICT on the 13th, "When generative AI is asked to generate attack code exploiting program vulnerabilities, it initially refuses, but if the question is rephrased to circumvent its safeguards, it not only produces the attack code but also explains the principles behind the attack."
He added, "There are even reports that sensitive personal information such as names, phone numbers, email addresses, and social media conversation histories can be extracted from the data used to train OpenAI's GPT-2 model."
Jung-Hee Kim, Director of Future Policy Research at the Korea Internet & Security Agency (KISA), also expressed concern, saying, "The barrier to acquiring hacking skills has dropped significantly with the emergence of generative AI services. More than 74% of cyber threats against the domestic public sector take the form of phishing email attacks, and if ChatGPT can write phishing emails that are more sophisticated and personalized than those written by humans, the scale of damage will be much greater."
Companies echoed these expert concerns and shared their response measures. Hwan-Seok Park, Security Planning Manager at KT, said, "We are discussing with Microsoft ways to use a dedicated ChatGPT space for KT," adding, "We are instructing employees to opt out of having their input data reused for ChatGPT retraining."
Gyu-Bok Kwak, Security Business Manager at LG CNS, said, "Companies are responding by allowing ChatGPT to be used only in closed environments or by applying monitoring tools to filter content."
Security companies suggested active research on security technologies incorporating generative AI. They also emphasized the need to build a cooperative ecosystem between the private sector and government. Il-Ok Jung, Head of the Control Technology Research Team at Igloo Corporation, said, "ChatGPT is a double-edged sword that can become a cyber threat or a tool to strengthen security depending on how it is used. If a specialized sLLM model for the security field is created and serviced, it is possible to build an on-premise AI that can be used without worries about data leakage. To achieve the goal of improving work efficiency and strengthening security through AI, collaboration among industry, academia, and research institutes is necessary."
Doo-Sik Yoon, CEO of Jiransoft Security, also warned, "If response measures are not developed through cooperation among the Ministry of Science and ICT, the police, telecommunications companies, and academia, a major incident will occur," urging active deliberation.
Yoon-Yoo Park, Vice Minister of the Ministry of Science and ICT, stated, "Generative AI technologies such as ChatGPT will be widely and universally used in our daily lives, so it is necessary to actively respond to security threat concerns," adding, "The government will do its best to create a safe cyber environment so that the public can use generative AI services with confidence and to build response capabilities against increasingly intelligent and sophisticated cyber threats."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.