(From left) Professor Hyunil Kim, Department of AI Software, Chosun University, and Minyoung Choi, Master's student. Provided by Chosun University
Chosun University announced on December 15 that the research team led by Professor Hyunil Kim of the Department of AI Software has discovered a new security vulnerability in Federated Learning AI, which is used in everyday applications such as smartphone keyboards, voice assistants, and chatbots.
The core researchers in this study were Professor Hyunil Kim (corresponding author) and Minyoung Choi, a master's student (first author) at Chosun University. They succeeded in implementing a backdoor (unauthorized manipulation) attack that bypasses all existing security technologies and persists for a long time.
'Federated Learning' is a technology designed to protect personal information by allowing a user's smartphone to train AI locally and only send the results to a server. By closely analyzing the internal structure of AI, the research team identified a 'weak point' in certain areas where attacks can easily be embedded.
In particular, they found that specific layers in AI models such as GPT-2 and T5 have a structure that allows externally injected attack signals to persist for an extended period. Since GPT-2 is the predecessor of OpenAI's ChatGPT and shares the same Transformer-based architecture, this vulnerability poses a significant threat to current commercial models as well.
The new attack method developed by the research team, called SDBA, precisely targets these weak points, stealthily embedding attack commands within the AI while disguising them as normal updates. Experimental results showed that SDBA persisted two to three times longer than previous attacks, and in some AI models, the attack effect lasted for up to 565 rounds, demonstrating exceptionally strong persistence.
This means that in real-world service environments, 'conditional malfunction' can be induced, where abnormal responses are triggered only under specific sentences or conditions. Given the widespread use of federated learning-based services such as smartphone keyboards, voice assistants, and chatbots, there is a risk that users may be exposed to distorted information for extended periods without realizing they have been attacked.
The research team explained that it is particularly concerning that even service providers may find it difficult to distinguish between normal updates and attacks, highlighting the limitations of current security systems and the potential for even greater security threats.
Professor Hyunil Kim emphasized, "This research is a very important achievement in revealing which structures in federated learning-based AI are vulnerable to attacks," adding, "AI security has now reached a stage where it is necessary not only to block attacks, but also to understand and protect the internal structure of AI itself."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

