The University of Seoul announced on July 9 that a paper titled 'One-Shot is Enough: Consolidating Multi-Turn Attacks into Efficient Single-Turn Prompts for LLMs,' co-authored by Ha Junwoo, a third-year undergraduate student in the Department of Mathematics, has been accepted for the main track of ACL 2025 (Association for Computational Linguistics), the world's most prestigious conference in the field of natural language processing (NLP).
ACL is the most authoritative international conference in computational linguistics and natural language processing, and it is the event that attracts the most attention from AI researchers around the world.
The official presentations are scheduled to take place from July 28 to July 30 at the Austria Center Vienna in Vienna, Austria.
This paper has drawn particular attention as a rare achievement, with a third-year undergraduate student being listed as a main track author.
This research introduces a novel approach that compresses multi-turn attacks on large language models (LLMs) into a single turn, achieving a 95.9% attack success rate while cutting token usage by up to 80%.
This accomplishment is recognized as systematically identifying the potential security vulnerabilities of LLMs and establishing new standards for the development of safer AI.
While previous studies have mainly used multi-turn prompt strategies to increase attack efficiency, Ha Junwoo demonstrated that compressing these into a single turn can reproduce the same or even greater attack threats.
The research team's 'M2S (Multi-turn-to-Single-turn)' framework converts a complex multi-turn conversation into a single structured prompt using three conversion strategies: hyphenation, numberization, and Pythonization.
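To illustrate the idea, the sketch below shows how a sequence of conversational turns might be folded into one structured prompt in each of the three styles. It is a minimal illustration written for this article under assumed templates; the function names, prompt wording, and formatting are not taken from the authors' released M2S implementation.

```python
# Illustrative sketch only: function names and prompt templates are assumptions,
# not the paper's actual M2S implementation.

def hyphenize(turns):
    """Render each user turn as a hyphen-prefixed bullet inside one prompt."""
    body = "\n".join(f"- {t}" for t in turns)
    return f"Please address the following points in order:\n{body}"

def numberize(turns):
    """Render each user turn as a numbered item inside one prompt."""
    body = "\n".join(f"{i}. {t}" for i, t in enumerate(turns, start=1))
    return f"Please answer the following questions in order:\n{body}"

def pythonize(turns):
    """Embed the turns as a Python list inside a code-styled prompt."""
    items = ",\n    ".join(repr(t) for t in turns)
    return (
        "questions = [\n"
        f"    {items}\n"
        "]\n"
        "# Respond to every element of `questions` in a single reply."
    )

if __name__ == "__main__":
    multi_turn = [
        "Summarize the topic at a high level.",
        "Now explain it in more specific detail.",
        "Finally, give step-by-step instructions.",
    ]
    print(numberize(multi_turn))
```

The point of all three formats is the same: the conversational escalation that would normally unfold over several turns is presented to the model as one self-contained request.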
Experimental results showed a 95.9% attack success rate on the Mistral-7B language model, 17.5 percentage points higher than the rate observed on GPT-4o. The framework also achieved a 70-80% reduction in token usage, significantly lowering the computational resources needed to achieve the same objectives.
Ha Junwoo commented, "This was a hands-on research experience that I gained while balancing my studies," adding, "The process of defining and solving AI safety issues together with co-first author Kim Hyunjun, while moving between a startup and the university, has become a great asset."
He further explained, "If the same threat can be reproduced with just a single line of input, then defense systems must also be able to pass 'single-line verification.' The significance of this research lies in demonstrating that the single-turn prompt-based attack model offers the potential for lightweight security evaluation frameworks."
The paper demonstrates that existing LLM security systems can be easily bypassed with single-turn inputs, strongly indicating the need to overhaul current security evaluation methods and defense strategies. Ha Junwoo plans to continue expanding his research, focusing on the field of AI security.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.



