AI Competition Creates 'Realistic Fakes'
Misused in Digital Sex Crimes, Raising Social Concerns
Generative Adversarial Networks (GANs) are a technology that creates realistic fake images, videos, audio, and more by pitting two artificial intelligences (AI) against each other so that each learns from the other's mistakes. The GAN was first introduced in 2014 by American computer scientist Ian Goodfellow at the Neural Information Processing Systems (NIPS) conference. A branch of deep learning, GANs are the core technology behind deepfakes.
A GAN consists of two artificial neural networks, the Generator and the Discriminator, which stand in an adversarial relationship. The Generator's role is to create data that resembles real data; the Discriminator's role is to judge whether given data is real or fake. The Generator tries to deceive the Discriminator, while the Discriminator tries to catch the fakes the Generator produces.
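In Goodfellow's 2014 paper, this tug-of-war is formalized as a minimax game, where $D(x)$ is the Discriminator's estimate that a sample $x$ is real and $G(z)$ is the Generator's output from random noise $z$:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

The Discriminator pushes the value of $V$ up by classifying correctly, while the Generator pushes it down by producing fakes that $D$ misclassifies.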
Ian Goodfellow explained this process with the analogy of a police officer and a counterfeiter. The counterfeiter's goal is to produce bills that look as real as possible and deceive the police; the police's goal is to spot the counterfeits and catch the counterfeiter. As this contest repeats, the counterfeiter eventually produces bills so convincing that the police can barely tell real from fake. In the same way, the two artificial neural networks improve each other's performance through an adversarial, iterative learning process.
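As an illustration of this iterative learning, here is a minimal GAN training loop sketched in PyTorch. The tiny fully connected networks, the latent size, and the stand-in "real" data distribution are arbitrary choices made for the example, not details from the article or Goodfellow's paper:

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2  # arbitrary sizes for illustration

# Generator: maps random noise to fake data samples.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, data_dim) + 3.0  # stand-in "real" data distribution
    noise = torch.randn(64, latent_dim)
    fake = G(noise)

    # Discriminator step: learn to label real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator step: try to make the just-updated Discriminator call fakes real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```

Each pass alternates one Discriminator update with one Generator update, the counterfeiter and police taking turns, which is the standard training pattern for GANs.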
Various applications of GANs have emerged. In 2017, the American big tech company NVIDIA released images of "people who do not exist." NVIDIA proposed a new training method that starts the Generator and Discriminator at a low resolution and progressively grows them to higher resolutions. The images generated this way reached a level where it is difficult to tell whether the person actually exists. In the same year, a fake speech video of former U.S. President Barack Obama, created by researchers at the University of Washington, also drew attention. The video was synthesized by taking audio from Obama's speeches and generating lip movements to match.
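The "progressive" part of NVIDIA's method can be illustrated by the fade-in blend used when new, higher-resolution layers are added. The sketch below shows only that blending idea under assumed tensor shapes; it is a toy illustration, not NVIDIA's actual code:

```python
import torch
import torch.nn.functional as F

def fade_in(old_path: torch.Tensor, new_path: torch.Tensor, alpha: float) -> torch.Tensor:
    """Blend the old low-resolution output (upsampled 2x) with the output
    of the newly added high-resolution layers.

    alpha ramps from 0 to 1 over a training stage, so the new layers are
    introduced gradually instead of shocking both networks at once.
    """
    upsampled = F.interpolate(old_path, scale_factor=2, mode="nearest")
    return alpha * new_path + (1.0 - alpha) * upsampled

# Example: halfway through growing from 8x8 to 16x16 output.
old = torch.randn(1, 3, 8, 8)    # output of the previously trained layers
new = torch.randn(1, 3, 16, 16)  # output of the newly added layers
blended = fade_in(old, new, alpha=0.5)  # shape: (1, 3, 16, 16)
```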
Unlike positive use cases such as AI news anchors and AI avatars of presidential candidates, deepfake technology has become a social problem as it is exploited in digital sex crimes. Deepfake videos that splice the faces of ordinary people, not just South Korean celebrities, into pornographic content have been distributed indiscriminately. Telegram chat rooms where such videos are shared, known in Korean as "jiin neungyok bang" (acquaintance-humiliation rooms) and "gyeop jiin bang" (mutual-acquaintance rooms), have proliferated, deepening the fears of the women targeted.
Governments around the world are responding. In October of last year, the United States issued an executive order mandating watermarks (identification marks) on AI-generated videos, photos, and audio. On the 30th of last month, South Korea decided to pursue legal amendments that raise the punishment for producing and distributing deepfake videos. Technologies to track and detect deepfakes have also emerged. In 2022, Intel unveiled "FakeCatcher," which detects fake videos with 96% accuracy by analyzing subtle blood-flow signals in the face. Last year, a research team at the Massachusetts Institute of Technology (MIT) announced "PhotoGuard," which imperceptibly perturbs an image's pixels to keep AI models from editing it without authorization.
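FakeCatcher's blood-flow analysis builds on remote photoplethysmography (rPPG): real skin shows faint, periodic color changes with each heartbeat, which synthesized faces tend to lack. The toy sketch below extracts a crude rPPG-style signal from a stack of face frames; it illustrates the underlying idea only and is not Intel's implementation:

```python
import numpy as np

def crude_rppg_signal(face_frames: np.ndarray) -> np.ndarray:
    """Very rough rPPG-style signal from a face video.

    face_frames: (num_frames, height, width, 3) RGB array of cropped faces.
    Heartbeats cause tiny periodic shifts in skin color, strongest in the
    green channel; a detector can check whether a coherent physiological
    signal is present. (Toy illustration, not Intel's FakeCatcher.)
    """
    green = face_frames[:, :, :, 1].astype(np.float64)
    signal = green.reshape(len(face_frames), -1).mean(axis=1)
    return signal - signal.mean()  # remove the constant (DC) component

# Example: 150 frames (about 5 seconds at 30 fps) of 64x64 face crops.
frames = np.random.rand(150, 64, 64, 3)
pulse_like = crude_rppg_signal(frames)
```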
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.
![[News Terms] Deepfake-Based Technology 'Generative Adversarial Network (GAN)'](https://cphoto.asiae.co.kr/listimglink/1/2024090309370215780_1725323822.png)
![[News Terms] Deepfake-Based Technology 'Generative Adversarial Network (GAN)'](https://cphoto.asiae.co.kr/listimglink/1/2024090309382815784_1725323908.png)

