
"Generative AI, a Double-Edged Sword for Biometric Authentication Technology... Security Must Be Enhanced with Two-Factor Authentication"

NIA Report "Multimodal and Multi-Factor Authentication Essential"
Generative AI Improves Biometric Authentication Accuracy
But Risks of Misuse Like Deepfakes Exist

A report has emerged stating that biometric security solutions require a two-factor authentication system to counter deepfake threats.


[Image] AI image generated with the keyword 'biometric authentication solution' / Photo by ChatGPT

On the 29th, the National Information Society Agency (NIA) of Korea stated in its report titled "Opportunities and Challenges of Biometric Technology in the Generative AI Era" that security must be enhanced through "multimodal biometric authentication" and "multi-factor authentication (utilizing various categories of authentication methods such as passwords, access cards, and biometrics)" to combat deepfake threats.


Multimodal biometric authentication refers to an authentication method that uses two or more biometric data types, combining physical characteristics (fingerprints, iris, face, veins, voice, etc.) or behavioral traits (gait, handwriting, typing habits, etc.). Multi-factor authentication means an authentication method that uses two or more factors among knowledge-based (passwords, security questions, etc.), possession-based (access cards, OTP, smartphones, etc.), and inherent factors (biometric indicators).
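The multi-factor rule described above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's implementation: the `authenticate` function and the verified-factor input are hypothetical, and only the three factor categories from the report (knowledge, possession, inherence) are taken as given.

```python
# Minimal sketch of the multi-factor authentication rule: access is granted
# only when factors from at least two DISTINCT categories are verified.
# Categories follow the taxonomy in the NIA report: knowledge (passwords,
# security questions), possession (access cards, OTP, smartphones),
# inherence (biometric indicators). The function name and input shape are
# illustrative assumptions.

FACTOR_CATEGORIES = {"knowledge", "possession", "inherence"}

def authenticate(verified: dict[str, bool], required_categories: int = 2) -> bool:
    """Return True only if factors from at least `required_categories`
    distinct categories passed verification."""
    passed = {cat for cat, ok in verified.items()
              if cat in FACTOR_CATEGORIES and ok}
    return len(passed) >= required_categories

# Password (knowledge) + fingerprint (inherence): two categories, granted.
print(authenticate({"knowledge": True, "inherence": True}))  # True
# A spoofed biometric alone is a single category: denied.
print(authenticate({"inherence": True}))                     # False
```

The point of requiring distinct *categories*, rather than just two checks, is that a deepfake compromises only the inherence category; a possession or knowledge factor remains an independent barrier.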


The reason NIA emphasized dual security is that generative AI acts as a double-edged sword for biometric technology. Deep learning algorithms can improve the accuracy and efficiency of biometric technology by learning and analyzing vast amounts of data, but at the same time, they may expose security vulnerabilities. It is difficult to identify meticulously crafted fake images, behaviors, or voices with a single authentication method.


Earlier this year, an incident drew attention in the United States when automated robocalls containing deepfake audio imitating President Joe Biden's voice, urging voters not to vote, were circulated ahead of the New Hampshire presidential primary.


NIA stated, "Deepfakes use generative AI to manipulate a person's face or voice easily and simply, and are exploited maliciously for fraud, misinformation, privacy invasion, and other purposes, making them a threat to the security of biometric systems," and emphasized, "Security solutions must continuously evolve to cope with threats like deepfakes, and additional defenses such as deepfake detection technology are necessary."


© The Asia Business Daily (www.asiae.co.kr). All rights reserved.
