"Find Weapons of Mass Destruction and Cyber Hacking Information Easily on DeepSeek"

Emergency Forum: "DeepSeek's Impact and Future Prospects"
100% Jailbreak Possibility, Dangerous Answers Abound
Chinese Government May Check 'Party Loyalty' Through Conversation History

Analysis has revealed that methods for creating weapons of mass destruction and code for cyber hacking can easily be found on China's AI model DeepSeek. Concerns have also been raised that the Chinese government is collecting large amounts of personal information from DeepSeek users to analyze their political tendencies.


The warning came on the 17th from Kim Myung-joo, head of the AI Safety Research Center at the Electronics and Telecommunications Research Institute (ETRI), during an urgent joint online forum themed "The Impact and Future Prospects of DeepSeek," hosted by the Korean Federation of Science and Technology Societies, the Korean Academy of Science and Technology, and the National Life Science Advisory Group. He said, "While researching the dangers of DeepSeek, we checked whether it contained information on the biology, chemistry, and nuclear physics relevant to making weapons of mass destruction, and found a significant amount."


Experts also pointed out that DeepSeek users can easily access dangerous information. Lee Sang-geun, a professor at Korea University's Graduate School of Information Security, said, "DeepSeek sometimes provides extremely dangerous answers, such as methods for manufacturing chemical weapons or code for cyber hacking," adding, "When attacked, DeepSeek can be jailbroken 100% of the time." Jailbreaking refers to bypassing the safety mechanisms that prevent a model from answering dangerous questions. While a typical AI model responds to a request like "Tell me how to hack Windows" with an answer such as "I cannot answer for ethical reasons," a successfully jailbroken model supplies methods for cyber attacks.


According to global security company Cisco, the jailbreak success rate for the DeepSeek model reached 100%, followed by Meta's LLaMA 3.1 model at 96%, and OpenAI's GPT-4o at 86%. OpenAI's o1 had the lowest rate at 26%.


Kim Myung-joo, Director of the AI Safety Research Institute. Yonhap News

Concerns were also raised that the Chinese government might 'profile' DeepSeek users. Director Kim explained, "We scatter fragments of personal information, such as writings and photos, everywhere, and by analyzing a few years' worth of these fragments it is possible to understand a person's tendencies," adding, "From political leanings, such as whether someone supports the ruling or opposition party, to what they like, marketing is also possible." He further noted, "If it deems it necessary for security, China can have unlimited access to the subscriber information held by Chinese companies. Through this so-called profiling, party loyalty checks could also be conducted."


Since DeepSeek is distributed as open source, calls have also been made to be wary of 'hidden code.' Hidden code refers to code that users do not normally notice but that is activated in special situations. For example, facial-recognition software may operate normally until it recognizes a specific face, at which point a virus concealed in the code is triggered, causing abnormal behavior.


Director Kim said, "I believe open source could contain backdoors, hidden code that can be used to bypass a service's security," adding, "Usually the safety of open source is judged by whether the distributor is trustworthy, but because this is China, there is particular suspicion."


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

