"Deepfake Pornography, Detecting Manipulation" IT Industry Initiates Technology Research

"Deepfake Pornography, Detecting Manipulation" IT Industry Initiates Technology Research


[Asia Economy Reporter Bu Aeri] In the wake of the recent Telegram Nth Room incident, police have launched an investigation after detecting signs that deepfake pornographic material featuring about 100 Korean celebrities was being systematically distributed. As cases of deepfake abuse continue to surface, domestic IT companies have rolled up their sleeves to develop technology capable of detecting manipulation.


Deepfake is a portmanteau of 'deep learning,' an artificial intelligence (AI) technique in which computers learn on their own, and 'fake,' meaning counterfeit. It refers to technology in which an AI analyzes and learns from input data to create fake images that are difficult to distinguish from real ones.
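
One common recipe behind such fakes is adversarial training: a generative network is trained until its output fools a second network that tries to tell real from fake. No specific product's design is described in this article, so the sketch below is only a generic, minimal illustration of that loop, assuming PyTorch; the layer sizes and 64x64 image shape are illustrative choices.

```python
# A minimal sketch of the adversarial idea behind deepfakes (generic, not any
# company's model). Assumes PyTorch; sizes are illustrative.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps random noise to a flattened 64x64 grayscale image."""
    def __init__(self, noise_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores how 'real' a flattened image looks (1 = real, 0 = fake)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

# One adversarial step: the discriminator learns to separate real from fake,
# while the generator learns to fool it. Repeated over a large image dataset,
# the generated images become hard to tell apart from real ones.
g, d = Generator(), Discriminator()
real = torch.rand(8, 64 * 64) * 2 - 1          # stand-in for a real image batch
fake = g(torch.randn(8, 100))
loss = nn.BCELoss()
d_loss = loss(d(real), torch.ones(8, 1)) + loss(d(fake.detach()), torch.zeros(8, 1))
g_loss = loss(d(fake), torch.ones(8, 1))       # generator wants 'real' verdicts
```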


◆ From 'Acquaintance Humiliation' to Celebrity Synthesis = According to industry sources on the 5th, illegal deepfake pornographic videos and photos, including so-called jiin neungyok ('acquaintance humiliation') material, are still circulating on social networking services (SNS).


Deepfake sexual crimes take place on Telegram, Twitter, Instagram, and other platforms. Searching for 'jiin neungyok' on SNS readily turns up posts requesting or offering to create such content. Any woman can become a target: men provide photos of female friends, family members, or acquaintances and ask for them to be synthesized into pornographic material.


Deepfake sexual crimes are widespread not only in Korea but around the world. According to 'Deeptrace,' a Dutch cybersecurity research company, there are approximately 15,000 deepfake videos worldwide, 96% of which are pornographic. Notably, 25% of the victims of pornographic synthesis are Korean female celebrities.


◆ Developing Ethical Deepfake Technology = As the problem has worsened, domestic AI companies have begun researching and developing deepfake detection technology that can determine whether a video has been manipulated.


AI company 'Moneybrain' is focusing its AI expertise and personnel on deepfake detection research. The company is building a deep learning model called 'AI Fake Finder' (an AI video manipulation detection technology) that judges whether a video is authentic, and is repeatedly training it on various base models so that the system learns to distinguish real from fake on its own.
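
Moneybrain has not published AI Fake Finder's internals, but the approach described, iterative training on top of base models to classify real versus fake, typically means fine-tuning a pretrained backbone into a binary classifier. The sketch below is a generic illustration under that assumption; the ResNet-18 backbone, batch shapes, and hyperparameters are illustrative choices, not the company's.

```python
# A hedged sketch of a deepfake detector: fine-tune a pretrained base model
# into a binary real/fake classifier. All specifics below are assumptions.
import torch
import torch.nn as nn
from torchvision import models

def build_detector() -> nn.Module:
    # Start from a base model pretrained on natural images, then replace the
    # final layer with a two-class head: 0 = real, 1 = manipulated.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)
    return backbone

detector = build_detector()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)

# One iterative training step on a batch of face crops labeled real/fake.
frames = torch.randn(8, 3, 224, 224)   # stand-in for preprocessed video frames
labels = torch.randint(0, 2, (8,))     # 0 = real, 1 = manipulated
optimizer.zero_grad()
loss = criterion(detector(frames), labels)
loss.backward()
optimizer.step()
```

Repeating this step over large labeled datasets, and over several different base models, is what "ongoing iterative training" would amount to in practice.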


Separately, 'RnDeep' recently completed development of 'Red AI,' a technology for blocking online sexual violence and harmful pornographic sites. Building on it, the company is developing technology to detect deepfakes and to block abusive language used illegally. Its recently unveiled 'Red AI Mosaic' technology inspects every frame of an input photo or video and automatically applies mosaics over key exposed areas. If problematic content is detected, the entire screen can be blacked out, or only specific parts blurred.
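
RnDeep's detection model is proprietary, so the sketch below stands in for it with a hypothetical detect_regions() callback. The per-frame mosaic step it feeds, shrinking a region and re-enlarging it, is the standard pixelation technique, shown here with OpenCV.

```python
# A minimal sketch of frame-by-frame mosaicking in the spirit of the
# 'Red AI Mosaic' description. detect_regions() is a hypothetical placeholder
# for a harmful-content detection model; the pixelation uses plain OpenCV.
import cv2

def pixelate(frame, box, block: int = 16):
    """Mosaic one (x, y, w, h) region by shrinking and re-enlarging it."""
    x, y, w, h = box
    roi = frame[y:y + h, x:x + w]
    small = cv2.resize(roi, (max(1, w // block), max(1, h // block)),
                       interpolation=cv2.INTER_LINEAR)
    frame[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                         interpolation=cv2.INTER_NEAREST)
    return frame

def process_video(path_in, path_out, detect_regions):
    # detect_regions(frame) -> list of (x, y, w, h) boxes, assumed to come
    # from a detection model supplied elsewhere.
    cap = cv2.VideoCapture(path_in)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(path_out, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for box in detect_regions(frame):
            frame = pixelate(frame, box)
        # To black out the whole frame instead of blurring regions: frame[:] = 0
        out.write(frame)
    cap.release()
    out.release()
```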


Jang Se-young, CEO of Moneybrain, said, "Deepfake technology has very high growth potential and utility, but as methods of misuse grow more sophisticated, the number of victims is rising rapidly, which is deeply regrettable. Based on the technological capabilities we possess, we will strive to present practical solutions so that deepfake technology can be used beneficially for industrial development."


© The Asia Business Daily (www.asiae.co.kr). All rights reserved.
