Blocking Misinformation Spread by Deepfakes
Developing Content Authentication Technologies Is Key
Education on Identifying Fake News Is Also Essential
In February of this year, Democratic primary voters in New Hampshire received automated phone calls, voiced in the likeness of U.S. President Joe Biden, urging them "not to vote." The incident is a recent example of deepfake technology in action. Deepfakes use artificial intelligence (AI) and deep learning algorithms to create videos or audio that are nearly indistinguishable from reality. While technically innovative, the technology carries serious ethical problems and social risks.
One of the main problems with deepfake technology, as the above case shows, is the spread of misinformation and fake news. Fake videos and audio of celebrities and public figures are increasingly being created to disseminate false information, sowing social confusion. Ordinary people are also vulnerable: the technology lends itself to a range of cybercrimes and financial frauds, such as voice phishing. As such cases multiply, trust across society can be severely damaged, ultimately weakening social cohesion and threatening the stability of communities.
Deepfake technology does have positive and creative potential uses. But given the severity of its problems, I believe it is not too late to hold serious discussions about those uses after regulatory and management systems against abuse have stabilized. This approach, in my view, is the way to guide the technology's healthy development.
Regulation and legal responses to deepfakes remain insufficient. Rather than attempting to ban the technology outright, we need comprehensive countermeasures that minimize its negative impacts. First, developing content authentication technologies and AI-based detection systems is important. For example, Project Origin, a collaboration involving The New York Times (NYT) and Microsoft (MS), is advancing technology that lets media creators certify the source of content through digital fingerprints or watermarks at publication, and alerts consumers if any alterations are detected.
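The core idea behind such fingerprint-based authentication can be sketched with a keyed hash. The following is a minimal illustration only, not Project Origin's actual mechanism (which involves cryptographic signatures and provenance metadata); the key and function names here are hypothetical. The publisher tags content with a fingerprint at publication; any later alteration changes the hash, so verification fails and the consumer can be alerted.

```python
import hashlib
import hmac

# Illustrative secret held by the publisher; a real system would use
# public-key signatures so anyone can verify without the secret.
SECRET_KEY = b"publisher-signing-key"

def sign_content(content: bytes) -> str:
    """Return a keyed fingerprint (HMAC-SHA256) of the content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, fingerprint: str) -> bool:
    """Check that the content still matches the original fingerprint."""
    expected = sign_content(content)
    return hmac.compare_digest(expected, fingerprint)

original = b"Official campaign statement, 2024-02-01"
tag = sign_content(original)

print(verify_content(original, tag))              # True: untampered
print(verify_content(b"Altered statement", tag))  # False: content changed
```

Even one flipped bit in the content yields a completely different hash, which is what makes tampering detectable without comparing the full original.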
Strict regulation, such as the DEEPFAKES Accountability Act, is also needed. The bill prohibits illegal acts using deepfakes and proposes measures to strengthen responses to them, though such legislation still requires further development and refinement.
Education and awareness-raising are equally crucial. People need education that helps them understand the threats deepfake technology poses and develop the ability to recognize fake content and judge its authenticity. Helping the public weigh technological advancement against moral responsibility matters. If the development of deepfake technology cannot be stopped or avoided, we urgently need a clear direction for how to manage and use it.
Yunseok Son, Professor at University of Notre Dame, USA
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
