Researchers have experimentally demonstrated that large language models (LLMs) such as ChatGPT can be misused to harvest personal information. Coming as Google has recently withdrawn its pledge not to apply artificial intelligence to weapons or surveillance, the finding adds to the controversy over AI misuse: LLM agents could realistically be used for personal information collection and phishing attacks.
(Top row, from left) Dr. Seungho Na of the Department of Electrical Engineering, KAIST, and Professor Gimin Lee of the AI Graduate School, KAIST; (bottom row, from left) PhD candidate Hanna Kim, Professor Seungwon Shin, and PhD candidate Mingyu Song, all of the Department of Electrical Engineering, KAIST. Photo provided by KAIST
KAIST announced on the 24th that a joint research team led by Professor Seungwon Shin of the Department of Electrical Engineering and Computer Science and Professor Gimin Lee of the AI Graduate School experimentally demonstrated the possibility of cyberattacks using LLMs.
Commercial LLM services from providers such as OpenAI and Google ship with built-in defense mechanisms intended to prevent their models from being used in cyberattacks. The joint research team's experiments, however, confirmed that despite these safeguards, malicious cyberattacks can still be carried out indirectly.
In particular, whereas attackers previously had to invest considerable time and effort in a cyberattack, LLM agents were able to carry out malicious tasks such as stealing personal information in an average of 5 to 20 seconds, at a cost of roughly 30 to 60 won (2 to 4 US cents) per attack. The team concluded that LLM services could become a new threat vector in the cyber environment.
In the experiments, LLM agents collected targets' personal information with up to 95.9% accuracy, and in a test that generated false posts impersonating a prominent professor, up to 93.9% of the posts were judged to be genuine.
The agents also demonstrated the ability to generate sophisticated phishing emails tailored to a victim using only the victim's email address; in this setting, the rate at which study participants clicked the phishing link rose to as high as 46.67%, underscoring the severity of AI-driven automated attacks.
Hanna Kim, the first author of the study, said, “The experiment confirmed that as the capabilities given to LLMs increase, the threat of cyberattacks grows exponentially,” adding, “Considering the capabilities of LLM agents, it is urgent to establish scalable security measures to reduce cyberattack risks.”
Professor Seungwon Shin said, “This research will serve as an important foundation for improving information security and artificial intelligence (AI) policy,” adding, “The research team plans to discuss security measures in cooperation with LLM service providers and research institutions.”
The research was conducted with support from the Institute of Information & Communications Technology Planning & Evaluation (IITP), the Ministry of Science and ICT, and Gwangju Metropolitan City.
The study, with Kim, a doctoral student in the Department of Electrical Engineering and Computer Science at KAIST, as its first author, is scheduled to be presented at the ‘USENIX Security Symposium 2025,’ one of the top international conferences in the field of computer security.
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.


