Swift Deepfake, Started from Community Members' 'Reckless Challenge'

'Motivation Behind AI Image Maker Regulation Evasion'
"Victims Find It Difficult to Receive Legal Assistance"

Amid controversy over so-called 'pornographic deepfakes' (fake images created using AI technology) maliciously synthesized using the image of global pop star Taylor Swift, analysis has revealed that these deepfakes originated as a 'challenge' started by members of a harmful online community in the United States.


'Pornographic Deepfakes' Traced to 'A Community,' a U.S. Forum Known for Racist and Sexist Content
Global pop star Taylor Swift. [Image source=AFP Yonhap News]

On the 5th (local time), The New York Times (NYT) reported that Graphika, a misinformation research firm, had traced the Swift deepfakes to the harmful online forum known as 'A Community' and reached this preliminary conclusion. A Community is notorious for sharing hate speech, conspiracy theories, and racist and offensive content generated with artificial intelligence (AI). It was also identified as a distribution channel last year, when a large volume of sensitive U.S. classified documents was leaked online, including records of U.S. intelligence agencies' wiretapping of allies such as South Korea.


Graphika found that members of this community attempted to bypass the safety measures built into image-generating AI tools such as OpenAI's 'DALL·E', Microsoft's 'Microsoft Designer', and 'Bing Image Creator'. Posts on the message boards urged users to "share tips and tricks for finding new ways to bypass filters" and encouraged them with phrases like "good luck, be creative." Members of A Community treated it as a kind of 'game' or 'challenge': testing whether they could use AI to create pornographic images of famous women. Notably, A Community has no rules prohibiting AI-generated explicit sexual images.


Driven by the 'Challenge Spirit' to Disable AI Image Generation Tools' Safety Features and Encouraged by User Praise
The Swift deepfakes first appeared on A Community on January 6. Those who bypassed the safeguards received praise, and requests soon followed within the community to share the prompts used to generate the images. Eleven days later, the images appeared on Telegram, and from the following day they spread via X (formerly Twitter). To curb their spread, X blocked searches for 'Taylor Swift' and 'Taylor Swift AI' starting on the 27th of last month and deleted the deepfake images. The search restriction on her name was lifted on the 29th.


Christina Lopez, a senior researcher at Graphika, explained, "These images originated from a community motivated by the 'challenge' of bypassing AI product safety measures, where each new restriction is seen as just another obstacle to overcome," and emphasized, "Swift is not the only victim." Deepfake images targeting numerous other actors, singers, and politicians have circulated within A Community.


Foreign media noted, "Software-generated fake pornography has been a problem since at least 2017, victimizing celebrities, government officials, Twitch streamers, students, and others against their will," and pointed out that "because regulations are lax, few victims can seek legal help, and few have a fanbase as strong as Swift's to fight back against fake images."




© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

