[AI in the General Election]② False Information, an AI-Generated Tsunami Approaches

Deepfake Video Use in Election Campaigns Banned
AI Technology Advances and Usage Grows... Divergent Views on the Scope of Regulation
Comment-Manipulation Programs Combining AI and Macros Emerge
EU to Fine Non-Compliant Platforms Up to 6% of Global Revenue

"Ahead of Taiwan's presidential election on the 13th, a wave of misinformation?from deepfakes to rapid TikTok videos?is hitting Taiwanese voters." (AFP)


With global attention focused on Taiwan's presidential election, AFP reported on the 10th that voters are facing a flood of pro-China misinformation ahead of the vote. The misinformation has been concentrated on Lai Ching-te, the pro-independence Democratic Progressive Party (DPP) candidate who rejects the claim that Taiwan is part of China. China has been identified as the source of the campaign, though Beijing has dismissed the accusations as mere rumors. The world is watching closely to see where voter sentiment will land amid this flood of misinformation.


The April 10 General Election Marks the Dawn of AI Elections, Potential for Unexpected Variables

The recent situation in Taiwan, where deepfakes, fake news, and other misinformation are rampant on online platforms and sowing confusion among voters, is a microcosm of the risks posed by "artificial intelligence (AI) elections." This is why AI is forecast to act as an unexpected variable in South Korea's general election on April 10, just three months away. Professor Cho Won-yong of the National Election Commission's Election Training Institute warned, "With over 1,000 candidates, the general election marks the dawn of AI elections. The 2027 presidential election, where a handful of candidates will compete head-to-head, will be an all-out AI war." The Election Commission's analysis is that deepfakes will hit candidates' personal traits, abilities, and morality hardest, while their influence on policies and pledges will vary from election to election.



In South Korea, an amendment to the Public Official Election Act prohibiting election campaigning that uses deepfakes from 90 days before an election passed the National Assembly plenary session last month. Accordingly, from the 29th, campaigning with promotional videos created using deepfakes will be banned outright. The AI debate surrounding elections, however, is only just beginning. As AI technology advances and its use spreads, experts differ on how far deepfake regulation should go. Some argue for strong regulation because deepfakes threaten fair elections and can be used as a tool of psychological warfare; others warn that excessive regulation could restrict freedom of the press, expression, and business.


Macro Programs Combined with Generative AI Like ChatGPT Also Manipulate Comments

It is not just fake news and deepfakes. Recently, macro programs that automatically repeat commands have been combined with generative AI such as ChatGPT to create sophisticated fake-comment posting programs. The problem arises when this kind of manipulation meets public elections: if comments are generated and posted at scale, the potential for misuse in swaying public opinion is considerable. Even now, a search on internet portals easily turns up posts advertising AI-powered services that promise to "handle YouTube, speech writing, and efficient election campaigning all at once" or that "sell programs that automatically post comments like a human." Professor Cho Won-yong said, "AI technology is advancing by the day and new businesses built on it keep emerging. Detection and identification technologies are developing too, but there are limits to how well they can keep pace. That is why the use of deepfake technology in public elections needs to be regulated."



With the advancement of AI technology, big tech companies are also accelerating their efforts. NAVER recently established the 'Future AI Center,' an organization dedicated to AI safety research that reports directly to CEO Choi Soo-yeon, keeping pace with active international efforts to improve AI safety and reliability. The center researches ways to improve AI safety technology and draws up AI ethics policies. With about 100 people involved, it plans to advance its technology research while developing services around NAVER's large language model (LLM), HyperCLOVA X, and managing the associated risks. NAVER Director Kwak Dae-hyun said, "This organization is not just a response to the April general election; it is about strengthening AI safety overall. We plan to present specific directions and future plans by the end of this month."


Kakao is strengthening research on technology to filter harmful content ahead of the general election. As part of its research on AI content health, it is studying how to detect misinformation and deepfakes, and it runs a separate team that responds technically to AI abuse. In the area of generative AI images in particular, Kakao Brain is reviewing invisible-watermark technology for its image generation model 'Karlo.' An invisible watermark cannot be seen by users but makes it technically possible to determine whether an image was generated by 'Karlo.' Google is also researching and operating this kind of technology.
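
To illustrate the general idea, the sketch below hides a short provenance tag in the least significant bits of an image's pixels and reads it back later. It is a deliberately naive Python example assuming only numpy and Pillow; the payload string and function names are hypothetical, and it does not represent the actual watermarking schemes used by Kakao Brain or Google, which are designed to survive compression and editing.

# Illustrative sketch only: a naive least-significant-bit (LSB) watermark.
# Payload and function names are hypothetical; production schemes are far more robust.
import numpy as np
from PIL import Image

PAYLOAD = "GENERATED-BY-MODEL-X"  # hypothetical provenance tag

def embed_watermark(img, payload=PAYLOAD):
    """Hide a UTF-8 payload in the lowest bits of the red channel."""
    arr = np.array(img.convert("RGB"), dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(payload.encode("utf-8"), dtype=np.uint8))
    red = arr[..., 0].flatten()
    if bits.size > red.size:
        raise ValueError("image too small for payload")
    red[: bits.size] = (red[: bits.size] & 0xFE) | bits  # overwrite lowest bit
    arr[..., 0] = red.reshape(arr.shape[:2])
    return Image.fromarray(arr)

def extract_watermark(img, length=len(PAYLOAD)):
    """Read back `length` bytes from the red-channel lowest bits."""
    arr = np.array(img.convert("RGB"), dtype=np.uint8)
    bits = arr[..., 0].flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8", errors="replace")

if __name__ == "__main__":
    canvas = Image.new("RGB", (256, 256), "white")  # stand-in for a generated image
    marked = embed_watermark(canvas)
    print(extract_watermark(marked))  # prints: GENERATED-BY-MODEL-X

Real invisible watermarks spread the signal across the whole image, for example in frequency space, precisely because a simple per-pixel mark like this one is destroyed by resizing or re-encoding.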


EU Fines of Up to 6% of Global Revenue for Non-Compliance with Online Platform Rules

Some are calling for stronger regulation of online platforms amid serious concern over the side effects of AI misuse. According to the Election Commission, a 2020 voter awareness survey on the 21st National Assembly election found that 43.4% of respondents, nearly half, got the candidate information they needed from internet portals, websites, and similar online channels. This suggests that portals, websites, and social media are the spaces most vulnerable to exploitation by misinformation.



The European Union (EU) already has the Digital Services Act, which requires large online platforms and search engines to identify, analyze, and assess, at least once a year, systemic risks within the EU relating to the spread of illegal information, violations of fundamental rights, civic discourse, elections, and public safety. It also mandates clear labeling of manipulated false information and backs these obligations with strong penalties, including fines of up to 6% of global revenue for large online platforms and search engines that fail to comply.


Choi Jin-eung, a legislative researcher at the National Assembly Research Service, emphasized, "South Korea also needs to clearly define the concept of 'election campaigning using deepfakes,' set the direction of regulation, and establish clear standards for lawful campaigning based on AI technology." He added, "Because strict legal regulation could suppress free political expression or public-interest reporting by the press, the targets of regulation need to be specified in concrete categories."


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

