Widespread Use of AI-Based Synthesis Technology 'Deepfake'
Anyone Can Easily Create Synthetic Videos...Concerns Over Misuse
'Deepfake Detection' AI Emerges to Counter Deepfake AI
AI vs AI Battle...Could Become a 'Spear and Shield' Duel
A video created using 'deepfake' technology that merges the face of the actor who plays 'Mr. Bean' onto the body of former U.S. President Donald Trump. / Photo by YouTube capture
[Asia Economy Reporter Lim Juhyung] Recently, videos labeled 'deepfake mashups' have become easy to find on YouTube. The term is a new coinage that combines 'deepfake,' a technique that uses artificial intelligence (AI) to overlay part of one person's body onto another video, with the 'mashup' genre, in which multiple songs are mixed to create new music.
Deepfake mashups are mainly produced by overlaying another actor's face onto famous scenes from movies, dramas, or news broadcasts: for example, replacing the lead actor's face in the movie "Iron Man" with that of Tom Hanks, or inserting the face of British actor Rowan Atkinson, famous for "Mr. Bean," into an interview with former U.S. President Donald Trump.
The results still vary with the skill of the person making the video, but as AI develops, deepfakes are becoming sophisticated enough that they are increasingly difficult to distinguish with the naked eye. This is why there are concerns that deepfakes could soon be misused for crimes such as fake news and identity theft.
Deepfakes are already widely used throughout our society. For instance, the Chinese video-sharing social networking service (SNS) "TikTok," which has over 800 million users across more than 150 countries worldwide, supports deepfake technology. Within TikTok, this feature is called "face swap," allowing users to film their own face and overlay it onto another person's body.
Deepfake technology is also being used in the global advertising industry, not just on SNS. In April 2019, English footballer David Beckham appeared in a public service advertisement for malaria eradication. Beckham spoke only English, but by manipulating his lip movements with deepfake technology, the producers were able to release the advertisement in a total of nine languages.
A scene in which English footballer David Beckham's face was partially manipulated with deepfake technology to produce dubbed versions of the advertisement in nine languages. / Photo by 'Malaria Must Die' YouTube capture
The problem is that as deepfakes become more common, it is becoming harder to distinguish them from real content. If deepfakes are used to spread fake news or steal others' personal information, it could lead to social chaos.
Deepfakes have already emerged as a social issue. During the "Nth Room" case, where perpetrators gathered on the messenger app "Telegram" to produce and distribute sexual exploitation videos for money, some suspects used this technology to synthesize women's photos into other obscene materials, committing so-called "acquaintance humiliation" crimes.
Similar incidents have occurred in other countries. On "Pornhub," the world's largest adult video website, so-called "deepfake pornography," in which celebrities' or ordinary people's faces are synthesized onto pornographic videos, began to appear in 2018.
Recognizing this, Pornhub announced in February 2018 that "deepfakes are no different from revenge porn," and declared that all AI-synthesized pornographic videos would be deleted and banned from the site.
According to a report released on June 3 by Dutch cybersecurity research company "Deeptrace," the number of deepfake videos online nearly doubled from 7,964 in 2018 to 14,678 in July last year. Among these, 96% (14,090) were deepfake pornographic videos.
So, is it impossible to prevent the misuse of deepfake technology?
Currently, the core technology behind deepfakes is the GAN (Generative Adversarial Network). A GAN pits two AI models against each other: a "generator," which produces fakes, and a "discriminator," which tries to tell those fakes apart from real data. Each time the discriminator catches one of the generator's fakes, the generator adjusts, so the unnatural parts of its output gradually disappear until the result is highly convincing.

Simply put, the AI keeps checking its own output against real data to create increasingly "realistic fakes." Previously, producing a sophisticated composite video or photo required human skill; now, entering a few simple commands into AI software is enough to produce a realistic composite.
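Below is a minimal sketch of this adversarial training loop, written in Python with PyTorch. The tiny network sizes and the random stand-in for "real" data are illustrative assumptions chosen for readability, not the architecture of any actual deepfake tool.

# Minimal GAN training loop: the generator learns to fool the discriminator,
# while the discriminator learns to separate real samples from generated ones.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

# Generator: turns random noise into a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(batch, data_dim)        # stand-in for real training data
    fake = G(torch.randn(batch, latent_dim))   # generator's attempt at a fake

    # 1) Train the discriminator to label real samples 1 and fakes 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator so its fakes are scored as "real" by the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()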
Moreover, as AI's synthesis skills improve by the day, it may soon become impossible for the human eye to reliably distinguish a deepfake from a real video.
Microsoft unveiled 'Video Authenticator' on the 1st (local time). It detects the parts of the frame that have been edited with deepfake techniques to determine whether a video is genuine. / Photo by Microsoft
However, just as deepfakes are spreading, 'deepfake detection' technology designed to counter them is also advancing. On the 1st, U.S. IT giant Microsoft announced "Video Authenticator," a tool that determines whether a video is a deepfake.
Like deepfakes themselves, Video Authenticator is an AI-based technology. It analyzes a video frame by frame, detects traces of deepfake editing, and calculates a "confidence score." Based on this score, the video is "certified," allowing deepfakes to be distinguished from genuine footage.
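The sketch below illustrates, in general terms, the kind of per-frame scoring and aggregation described above. It is a hypothetical example, not Microsoft's actual code or API; `frames` (a list of decoded video frames) and `detector` (a trained per-frame classifier) are placeholder assumptions.

# Hypothetical illustration of frame-by-frame deepfake scoring (not Microsoft's code):
# a trained detector assigns each frame a manipulation probability, and the
# per-frame scores are averaged into one confidence value for the whole video.
from statistics import mean

def video_confidence(frames, detector):
    """Average probability (0.0-1.0) that the video has been manipulated."""
    scores = [detector(frame) for frame in frames]  # one score per frame
    return mean(scores)

def is_deepfake(frames, detector, threshold=0.5):
    """Flag the video as a likely deepfake if the average score crosses the threshold."""
    return video_confidence(frames, detector) >= threshold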
In addition, internet companies such as Google and Facebook are also developing technologies to detect and identify deepfakes.
Both deepfake creation and deepfake detection use deep learning AI. The difference is that deepfake AI is trained to create more sophisticated fake videos, while deepfake detection AI is trained to identify edited parts of such fake videos.
In a way, the two AIs have become spear and shield, locked in an endless competition.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

