[Deepfake Fear Targeting Individuals and Companies①] My Face Used in Crime?... Detection Technology Becomes a 'Must-Have'

From Sex Crimes Targeting Individuals and Celebrities to Financial Fraud by Impersonation, Deepfake Fears Spread
Global Race to Establish Systems and Develop Technology...
Need for Deepfake Detection in Everyday Life

[Photo caption] Awareness is rising about crimes that exploit deepfake technology.

#A woman in her 50s in France, Ms. A, fell in love with a man who approached her on social media and messaging apps claiming to be Brad Pitt, and after two years of contact she divorced her husband. Believing he really was Brad Pitt, she sent him 1.2 billion KRW that he demanded for cancer treatment costs. When Ms. A discovered he was a scammer impersonating Brad Pitt with deepfake images, she had to be treated for depression.


#Police recently caught a gang that opened a YouTube gambling channel using deepfake videos synthesized with the faces of famous Korean celebrities, broadcast gambling shows, and recruited participants, illicitly raking in 380 billion KRW. The gang drew in viewers by superimposing the faces of popular celebrities onto their own.


As generative AI technology advances, crimes abusing deepfake technology, which fabricates videos or images using other people's faces, keep coming to light, spreading deepfake fears. Beyond government and corporate responses, individuals are also becoming more aware of the importance of guarding against deepfake crimes in everyday life.


Following last year's shocking revelation of a sex crime in which a man from a prestigious university distributed videos that spliced the faces of female alumni into pornographic content, deepfake crime has expanded into fraud committed by impersonating other people's faces. Deepfakes are being abused not only to humiliate individuals but also to inflict financial and psychological harm on the public. As a result, celebrities and ordinary people alike live in fear that their faces might be manipulated, even by acquaintances, and used in crimes targeting many people.


From January to November last year, nearly 1,000 reports of deepfake sex crimes were filed with the National Police Agency. Additionally, fraud using deepfakes frequently occurs in forms such as fake cryptocurrency projects or inducements to participate in gambling. In fact, the digital asset trading platform Bitget projected that deepfake cryptocurrency scam damages would more than double last year, reaching about 2.5 million USD.


To respond to worsening deepfake crime, the government is accelerating the enactment of related laws. At the recent ‘Major Issue Resolution Meeting’ chaired by Acting President and Deputy Prime Minister Choi Sang-mok, the Ministry of Science and ICT announced that it would complete the subordinate legislation of the AI Basic Act, which includes deepfake risk management, early in the first half of the year. The Personal Information Protection Commission’s 2025 work plan also includes giving victims the right to request deletion of synthetic content created with deepfakes.


Globally, governments and companies are actively preparing countermeasures by enacting related laws and investing in the development of deepfake detection technology. At the end of last year, the U.S. Department of Defense decided to invest 2.4 million USD over two years in startups developing deepfake detection technology. Earlier, at last year's Munich Security Conference in Germany, big tech companies such as Amazon, Google, Meta, and Microsoft issued a joint statement aimed at curbing the harmful effects of deepfakes.


Since anyone, from politicians and celebrities to ordinary individuals, can be targeted, the importance of preparedness in daily life is being emphasized. In line with this trend, domestic security companies have also begun efforts to root out these crimes with deepfake detection technology.


Recently, the IT security and authentication platform company RaonSecure commercialized a feature in its consumer mobile antivirus app, ‘Raon Mobile Security,’ that uses AI to detect deepfake videos and images.


Through the app, anyone can upload videos or images stored on their smartphone, or paste links to online videos, and the AI returns the probability that the content is a deepfake within seconds. RaonSecure plans to keep advancing its deepfake detection technology, saying it can help root out deepfake-driven defamation, fraud, and fake news targeting celebrities and ordinary people alike.
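The workflow described above, submitting an image and getting back a deepfake probability, can be pictured with a generic classifier sketch. The code below is a minimal, purely hypothetical Python/PyTorch illustration and is not RaonSecure's implementation: the placeholder architecture, the deepfake_probability helper, the untrained weights, and the 'sample.jpg' file are all assumptions made for this example.

# Hypothetical sketch of an "image in, deepfake probability out" classifier.
# NOT RaonSecure's method; the model is an untrained placeholder.
import torch
import torch.nn as nn
from torchvision import transforms
from PIL import Image

class TinyDeepfakeNet(nn.Module):
    """Placeholder CNN mapping a 224x224 RGB image to a single logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.head(x)

def deepfake_probability(image_path: str, model: nn.Module) -> float:
    """Return the model's estimated probability that the image is a deepfake."""
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logit = model(image)
    return torch.sigmoid(logit).item()  # sigmoid turns the logit into a 0-1 score

if __name__ == "__main__":
    model = TinyDeepfakeNet().eval()  # untrained placeholder weights
    # 'sample.jpg' is a hypothetical local file supplied by the user.
    print(f"deepfake probability: {deepfake_probability('sample.jpg', model):.2%}")

In a real product, the model would be trained on large sets of genuine and synthesized faces and would also handle video frames and compression artifacts; the sketch only shows the shape of the inference step the article describes.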


Experts agree that continuous research and development of such detection technology is essential. Because deepfakes are built on AI that constantly learns and evolves, attack techniques grow more sophisticated as detection technology advances, creating an ongoing battle between offense and defense.


A security industry expert said, “Because deepfake crimes can target anyone at any time, deepfake detection technology must become something everyone can easily reach in daily life. Continuous R&D is needed so that detection keeps pace with the rapid development of deepfake technology. As deepfake content becomes more sophisticated, the government, companies, and individuals must all stay vigilant and draw on institutions, technology, and civic awareness to root out these crimes and protect themselves and society.”


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
