[News Terms] The Malicious Version of ChatGPT, 'WormGPT'

Designed to Generate Human-Like Text
Impersonating Executives to Steal Corporate Funds via Fake Emails
Concerns Over the Popularization of Hacking Crimes Due to Easy Use by Beginners

As generative artificial intelligence (AI) technologies such as ChatGPT advance, cybercrimes utilizing these technologies are becoming more sophisticated. With cyber threats exploiting generative AI expected to intensify starting this year, a malicious AI called 'WormGPT' is emerging into the spotlight.


WormGPT, a malicious counterpart of ChatGPT, is an AI model developed to conduct phishing and BEC attacks. BEC (Business Email Compromise) attacks are cyber fraud crimes that target businesses with fake emails.

WormGPT [Photo: hacking forum]

WormGPT is designed to generate human-like text, helping hackers carry out malicious attacks. For example, a hacker impersonates an executive of a specific company and sends fake emails to finance department staff, requesting bank transfers or changes to bank account information in order to steal corporate funds or executive information.


The company that first discovered WormGPT is SlashNext, a U.S.-based email security firm. In July 2023, SlashNext announced that it had found WormGPT, a generative AI crime tool, on a hacking forum. SlashNext analyzed that WormGPT is a chatbot similar in format to ChatGPT and is designed to facilitate cybercrimes such as personalized phishing and BEC attacks.


WormGPT was created based on GPT-J, an open-source language model released in 2021 by the AI research organization EleutherAI, and it was found to have been trained intensively on malware-related data. Unlike other generative AIs, WormGPT has had all of the ethical safeguards that restrict responses to malicious requests removed.


Daniel Kelly, a researcher at SlashNext, said, "Emails generated by WormGPT use professional business language and contain no spelling or grammatical errors, making phishing attempts difficult to detect."

WormGPT interface [Photo: hacking forum]

With hackers expected to actively utilize generative AI starting this year, the risk of cybercrime is also increasing. In particular, the emergence of WormGPT raises concerns that even beginners can easily commit crimes using generative AI, potentially leading to the 'popularization of hacking crimes.'


The Ministry of Science and ICT and the Korea Internet & Security Agency (KISA) warned in their '2025 Cyber Threat Outlook' report that cyber security threats exploiting generative AI models could increase further. Major cyber threats this year include ▲the active use of generative AI ▲increased cyber threats to digital convergence systems ▲potential rise in cyber threats due to changes in the international environment ▲and an expected increase in indiscriminate DDoS (Distributed Denial of Service) attacks.


The Ministry of Science and ICT noted that, in addition to ChatGPT, the use of domestic generative AI is expanding, and that malicious AI models specialized for cybercrime, such as WormGPT, which is inherently illegal, are being distributed through the dark web, predicting that cyber threats based on these models will also increase.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
