From Cyber Attack Detection to Voice Phishing Prevention
Achieving 'Two Goals at Once' with Information Protection + Generative AI Utilization
Thanks to generative AI, the shield is evolving too. Security software is arming itself with AI to defend against increasingly sophisticated cyberattacks, and AI technology is expected to drive growth in the global security market.
Market research firm MarketsandMarkets forecasts that the AI-powered cybersecurity market will grow from $24.4 billion (about 32 trillion KRW) this year to $60.6 billion (about 80 trillion KRW) by 2028, an average annual growth rate of 21.9%.
AI can serve as a 'versatile assistant' for security experts, who can devise defense strategies based on vulnerabilities AI uncovers, or describe security requirements and have it write code. Only a thin line separates AI as a hacker's accomplice from AI as a defender's helper. Used with good intentions, it can ease the burden of relying on a small pool of security operators and enable more efficient responses.
A representative example is Igloo Corporation's AI detection model 'IglooXAI' (tentative name), which analyzes cyber threats in conjunction with ChatGPT. Like ChatGPT, it explains the criteria by which the AI judged a given behavior normal or abnormal. For example, when a payload (the core part of malware) is submitted to check for an attack, the AI reports the predicted result and the characteristics of the attack. Igloo Corporation has completed pilot testing and plans to launch IglooXAI officially in July.
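The article does not describe IglooXAI's internals, but the pattern it names, a verdict on a payload accompanied by the evidence behind it, can be illustrated with a minimal, entirely hypothetical rule-based screen. The signature names and regexes below are invented for illustration and are not Igloo's actual detection logic:

```python
import re

# Hypothetical signature list: patterns often seen in malicious payloads.
SUSPICIOUS_PATTERNS = {
    "shell_download": re.compile(r"(wget|curl)\s+http", re.I),
    "base64_exec": re.compile(r"base64\s+-d|frombase64string", re.I),
    "sql_injection": re.compile(r"union\s+select|or\s+1=1", re.I),
}

def classify_payload(payload: str) -> dict:
    """Return a verdict plus the evidence that produced it."""
    hits = [name for name, pat in SUSPICIOUS_PATTERNS.items() if pat.search(payload)]
    return {
        "verdict": "abnormal" if hits else "normal",
        "evidence": hits,  # the 'criteria' a human reviewer can inspect
    }

print(classify_payload("GET /?q=1 UNION SELECT password FROM users"))
# {'verdict': 'abnormal', 'evidence': ['sql_injection']}
```

In a production system the classifier would be a trained model rather than regexes, and the evidence could be passed to an LLM such as ChatGPT to generate the natural-language explanation the article describes.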
AI is also being used to prevent voice phishing. Raon Secure's subsidiary Raon Whitehat offers 'Smart Anti-Phishing,' an AI-based voice phishing prevention app. Once installed, the app's AI monitors smartphone calls, text messages, messenger apps, and key activities such as remote control and fund transfers in real time. If it detects a suspected voice phishing attempt, it sends real-time alerts to banks so transfers and loans can be blocked, forcibly terminates the call, and notifies the victim's acquaintances. The company says this approach has prevented approximately 44.6 billion KRW in voice phishing damage over the past three months.
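The response sequence the article lists (block transfers, end the call, warn contacts) can be sketched as a simple dispatch function. This is a hypothetical illustration only; the app's real integrations with banks and telecom carriers are not public, and every name here is invented:

```python
def respond_to_phishing(event: dict) -> list[str]:
    """Sketch of the countermeasure sequence described in the article:
    block financial activity, hang up, then warn the user's contacts."""
    if event.get("risk") != "voice_phishing":
        return []  # no action for benign activity
    user = event["user"]
    return [
        f"notify_bank:block_transfers_and_loans:{user}",  # real-time alert to banks
        "terminate_call",                                 # forcibly end the call
        f"alert_contacts:{user}",                         # notify acquaintances
    ]

print(respond_to_phishing({"risk": "voice_phishing", "user": "kim"}))
```

The ordering matters: freezing transfers before the call ends narrows the window in which a coerced victim can still move money.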
There is also shield technology that uses AI to prevent information leaks. Jiranjigyo Data launched 'AI Filter' in April. It monitors content entered into ChatGPT and blocks designated keywords, sentences, and patterns. Jo Won-hee, CEO of Jiranjigyo Data, emphasized, "The easiest way to prevent information leaks through ChatGPT is to block ChatGPT altogether, but that means giving up benefits such as improved work efficiency. We need to find ways to use ChatGPT while still protecting information."
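The keyword-and-pattern blocking the article attributes to AI Filter is, at its core, a prompt-screening gate. A minimal sketch follows; the blocklist entries and function name are assumptions for illustration, not Jiranjigyo Data's actual rules:

```python
import re

# Hypothetical blocklist: terms and patterns an administrator designates.
BLOCKED_KEYWORDS = {"confidential", "internal only"}
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{6}-\d{7}\b"),      # Korean resident registration number shape
    re.compile(r"\b[A-Z]{2}\d{10,}\b"),  # sample account-number shape
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons). Runs before text leaves the organization."""
    reasons = [kw for kw in BLOCKED_KEYWORDS if kw in prompt.lower()]
    reasons += [p.pattern for p in BLOCKED_PATTERNS if p.search(prompt)]
    return (not reasons, reasons)

allowed, reasons = screen_prompt("Summarize this confidential roadmap")
print(allowed, reasons)  # False ['confidential']
```

Screening the outbound prompt, rather than blocking ChatGPT entirely, is what lets a company keep the productivity benefits the CEO mentions while still enforcing its data policy.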
Fasoo has begun developing an enterprise AI chatbot. Unlike general-purpose AI such as ChatGPT, 'F-PAAS (Fasoo Private AI Assistant Services),' planned for release early next year, will serve individual companies and institutions exclusively. The strategy is to build a large language model (LLM) that lets enterprises use generative AI tailored to their needs while keeping internal data securely protected.
Professor Kwon Hyun-young of Korea University's Graduate School of Information Security predicted, "How AI is utilized depends on people, so the battle between spear and shield will continue."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
![[The Two Faces of AI]② Shields Also Evolve... 80 Trillion Won Market Opens Wide](https://cphoto.asiae.co.kr/listimglink/1/2023060516005853764_1685948457.jpg)