[Interview] "AI Lowering Attack and Defense Hurdles... Trust Boundaries Are Breaking Down"

Interview with Google DeepMind AI Cybersecurity Research Team Leader
"Solving Data Imbalance with AI... An Environment Favorable for Defense"

Elie Bursztein, head of the AI cybersecurity technology research team at Google DeepMind, is interviewed on the 19th at the Google office in Gangnam-gu, Seoul. Photo by Jinhyung Kang aymsdream@

"The emergence of generative artificial intelligence (AI) inevitably blurs our judgment of what can be trusted. Attacks using deepfakes or deep voices are not only a security threat; they can also cause the greatest social anxiety."


Elie Bursztein, lead of the AI cybersecurity technology research team at Google DeepMind, Google's AI development subsidiary, offered this diagnosis in a recent interview with Asia Economy. Bursztein joined Google in 2011 and moved to Google DeepMind in 2023, where he researches AI technologies applicable to cybersecurity.


Generative AI is being used as a weapon for cyberattacks. Representative examples include 'deepfakes,' which create realistic images and videos, and 'deep voices,' which mimic voices. For instance, scammers may demand money using a family member's voice generated by AI, or extract confidential information through videos impersonating company executives. Although the underlying technology is not entirely new, the significant change is that anyone can now use it, Bursztein pointed out. He said, "Before generative AI, creating deepfakes required expertise in coding or software. Now, it is easy to access technologies for creating malware or deepfakes simply by conversing with AI as if talking to another person."


AI's ability to perform even complex tasks instantly also poses a security threat. In the past, AI had limited functions, such as translation or recognizing specific images, so the criteria for judging good models or restricting functions were clear. In contrast, AI that has become an 'all-purpose assistant' is not that simple. It can paint like an artist, but it should not create realistic nude photos. For security, standards about what is and is not allowed must be embedded in AI models, yet the boundary itself is ambiguous. As a result, so-called 'jailbroken' AI sometimes produces harmful outputs such as malware. An AI jailbreak refers to inputting specific commands or scenarios that cause an AI to bypass its ethical guidelines.



At the same time, AI has also evolved as a 'shield' to block security threats. The core of security work is identifying malicious code planted by attackers within vast amounts of data. The more data there is, the harder such code is to detect, but the story changes when AI is used. Bursztein explained, "The more data there is, the higher AI's attack-detection accuracy becomes," adding, "Humans can instead focus on more complex security issues." While attackers must understand an entire system and find its vulnerabilities, defenders can delegate the analysis of vast amounts of information to AI and thereby strengthen their defenses. In this regard, he emphasized that AI creates a more favorable environment for defenders.


What matters is how AI, which can serve as both spear and shield, is put to use. From a security perspective, AI must be used actively, while society also discusses how far the outputs of AI models should be allowed to go. Bursztein said, "Scammers and hackers have always existed, and they always find new methods," concluding, "Ultimately, the key is how to make AI a human assistant."


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
