
[Deepfake Fear Targeting Individuals and Companies②] Fake CEO Gave Orders... Urgent Need for 'Real-Time' Detection Technology

Rising Threat of Crimes Involving Deepfake Impersonation in Video Calls
Need for Real-Time Deepfake Detection Technology Beyond Images and Videos

# Last October, viewers who opened a social media video disguised as a live broadcast of Tesla's RoboTaxi event were exposed to a deepfake scam. In the fake live stream, a deepfake-manipulated Elon Musk, Tesla's CEO, instructed viewers to send cryptocurrency via a QR code in order to watch. Earlier, in April of last year, a woman in Korea lost 70 million won to a scammer who approached her over a video call while impersonating Elon Musk.


Until now, deepfake crimes have been perceived mainly as a tool for sexual offenses. It recently came to light that 'Moksabang' (Pastor Room), the largest cyber sexual exploitation ring uncovered to date, produced exploitation material using deepfakes. But deepfake abuse is now expanding beyond sexual offenses into celebrity impersonation that targets individuals and companies to extort money and information.


Deepfake criminals mainly use social media to deceive individuals by impersonating well-known CEOs and celebrities, inducing victims to transfer cryptocurrency or send money for gambling funds. This is made possible by technologies such as 'face swap,' which replaces a person's face in a video using just a single photo, and 'lip-sync synthesis,' which alters lip movements to match arbitrary spoken content.


As generative AI advances rapidly, these techniques are becoming more sophisticated. While they improve the efficiency of video content creation, the risk is growing that they will be misused to impersonate company CEOs or executives and steal confidential corporate information and other critical assets.


In particular, criminals are going beyond synthesizing another person's face into existing videos: they are abusing real-time deepfake technology that transforms their own face into someone else's during a live webcam conversation. As AI refines this technology further, scenarios in which criminals infiltrate personal video calls and corporate video conferences to commit fraud could become reality.


Accordingly, the development of 'real-time' deepfake detection technology is urgently needed, and domestic security companies have begun developing it. IT security and authentication platform company RaonSecure has built AI-based detection of deepfake videos and images into its personal mobile antivirus app, 'Raon Mobile Security,' and is now developing real-time deepfake detection. The technology can determine whether an ongoing video call or video conference on a smartphone screen is a deepfake.


RaonSecure plans to apply its real-time detection technology not only to consumer deepfake detection apps but also to enterprise offerings. Individuals could use it to avoid phishing via deepfake video calls that impersonate acquaintances. Companies such as telecom providers could offer the technology to customers, or use it to protect their employees from CEO and executive impersonation in video calls and conferences, safeguarding corporate information and preventing financial fraud.


As real-time deepfake technology for webcams and video calls advances, crimes impersonating acquaintances are also growing more sophisticated. A fake CEO could issue instructions over a video call that cause corporate losses, and a fake employee could sit in on a video conference to steal confidential information. This makes demand for real-time detection all the more critical for both individuals and companies.


In fact, demand for technology to prevent deepfake crimes is rising alongside the development of generative AI. Market research firm MarketsandMarkets forecasts that the global deepfake detection market will grow from $500 million (approximately 726.7 billion won) in 2022 to $1.8 billion (approximately 2.6163 trillion won) by 2027.


A security industry expert said, "Deepfake criminals are no longer merely deceiving others with pre-made deepfake images or videos; they are infiltrating individuals and companies in real-time video by swapping their own faces for someone else's. Real-time deepfake detection technology will help protect personal and corporate assets and maintain industrial competitiveness. As deepfake technology grows more sophisticated, continuous research and development of detection technology is essential."


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.