Nude Photos Can Be Produced and Distributed Without the Consent of Those Depicted
In the United States, the number of users of deepfake applications (apps) and websites that use artificial intelligence (AI) to digitally undress women in photos is surging.
24 Million Visitors to Deepfake Websites in September... Ads Encouraging Sexual Harassment Also Found
On the 10th (local time), Bloomberg News, citing the social network analysis company Graphika, reported that 24 million people visited deepfake websites that use AI to undress people in photos in September alone.
Deepfake is a portmanteau of deep learning and fake, referring to AI-based manipulated images or videos that realistically alter faces and other features.
According to Graphika, the number of links advertising AI undressing apps on social media platforms such as X (formerly Twitter) and Reddit rose more than 2,400% in September compared with the beginning of the year.
Graphika attributed the popularity of these apps and websites to advances in AI that enable the creation of far more "believable" images than was possible just a few years ago.
Deepfake apps and websites use AI to create images that make it appear as if the person in a photo is undressed. Most of the subjects of these photos are women, and many of the apps are reportedly designed to work only on images of women.
Concerns are growing that the popularity of deepfake apps and websites could lead to criminal misuse. Most of the time, explicit content such as nude photos is created and distributed without the consent or awareness of the individuals involved.
In fact, one advertisement posted on X encouraged sexual harassment outright, telling users they could create nude images of others with AI and then send those images to the person depicted.
Another related app paid for advertising on YouTube to appear first when searching for the word "nudify."
"Ordinary People Targeting Ordinary People"... Google, Reddit, TikTok Say They Are "Cracking Down"
An image of former U.S. President Barack Obama created with deepfake technology. [Image source=University of Washington]
Privacy experts expressed concern that advances in AI technology are making deepfake software both easier to use and more effective.
Eva Galperin, Director of Cybersecurity at the digital rights advocacy group Electronic Frontier Foundation, emphasized, "There is an increasing number of cases where ordinary people target other ordinary people with this behavior," adding, "This is happening among high school and college students."
Google responded to the issue by stating, "We do not allow explicit sexual content," and added, "We have reviewed the problematic ads and are removing those that violate our policies."
A Reddit spokesperson also said, "The non-consensual sharing of fake explicit content is prohibited, and as a result of investigations, many domains have been blocked."
TikTok also stated that it is blocking keywords such as "undress."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.