With the rapid advancement of generative AI, companies are adopting chatbots at an accelerating pace. However, concerns about the risks inherent in generative AI, such as 'hallucination' and security breaches, are also growing. Hallucination refers to the phenomenon in which an AI generates false or fabricated content and presents it as if it were factual.
A recent case in which Air Canada's chatbot gave a passenger incorrect information about the airline's ticket refund policy clearly illustrates the dangers of hallucination. The false guidance not only caused customer dissatisfaction but also created a serious problem that could damage the company's brand image.
Hallucination can expose companies to legal risks such as the spread of misinformation, defamation, and copyright infringement, so it demands careful attention. Security risks, such as the leakage of personal information or corporate secrets collected during conversations, are an equally pressing challenge.
According to the latest report by McKinsey, 45% of companies identified risk management of generative AI as their top priority. This underscores the growing importance of safe and responsible AI utilization. To achieve this, it is essential first to accurately understand the causes of hallucination and security risks and then establish systematic response strategies.
- Fact or Falsehood? The Risks of Hallucination and Strategies to Overcome Them
Various technical approaches are being attempted to solve the hallucination problem, including training proprietary models, fine-tuning, and prompt engineering. The technology drawing the most attention recently, however, is retrieval-augmented generation (RAG). RAG produces more accurate and consistent responses by drawing on external knowledge: given a query, it retrieves relevant information in real time and grounds its answer in the retrieved material.
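To make the retrieve-then-generate loop concrete, here is a minimal sketch of the RAG pattern. The toy corpus, the bag-of-words similarity scoring, and the prompt template are illustrative assumptions standing in for a production vector database and embedding model; they are not Makebot's or any specific vendor's implementation.

```python
# Minimal RAG sketch: retrieve the passages most similar to the query,
# then build a prompt that confines the model to that retrieved context.
import math
from collections import Counter

# Hypothetical in-memory corpus; a real system would query a vector database.
CORPUS = [
    "Refund requests for refundable fares may be made within 24 hours of purchase.",
    "Checked baggage fees vary by route and fare class.",
    "Frequent-flyer miles expire after 18 months of account inactivity.",
]

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts; real systems use learned embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = vectorize(query)
    return sorted(CORPUS, key=lambda doc: cosine(q, vectorize(doc)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(f"- {p}" for p in retrieve(query))
    # Restricting the model to the retrieved passages, and telling it to
    # admit when they are insufficient, is how RAG curbs hallucination.
    return (
        "Answer using ONLY the passages below; if they are insufficient, say so.\n"
        f"Passages:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("What is the refund policy for my ticket?"))
```

Because the final answer is grounded in retrieved, citable passages, the model has far less room to invent facts; the trade-off, as noted below, is that answer quality now hinges on retrieval quality.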
RAG's advantage is that it overcomes the limitations of static training data and can draw on up-to-date information. Citing the sources of retrieved information also makes responses more trustworthy. However, because RAG depends heavily on its retrieval step, answer accuracy declines when search quality is poor. Slower response times from searching large databases and higher API call costs are further drawbacks, and quality and bias problems in the external knowledge itself are difficult to rule out entirely.
Makebot Co., Ltd., a specialized AI chatbot company, is developing a dedicated solution to overcome these limitations of RAG. Using its self-developed knowledge graph and search algorithms, the company improves retrieval accuracy and continues to advance its RAG model, backed by numerous patents and ongoing research. Makebot's solution has attracted attention for its strong performance in specialized fields such as finance and healthcare.
- The First Step to Protecting Data: Core Strategies for Managing Security Risks
To manage security risks, establishing a data governance system is an urgent priority. Technical safeguards such as personal-information anonymization, data access control, and encryption (one simple example is sketched below) must be paired with administrative measures such as employee training and monitoring. It is also important to establish AI ethics guidelines and apply them throughout development and operations; IBM's 'AI Ethics Board', for example, evaluates and guides the ethics of the company's AI projects.
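As a concrete illustration of the anonymization step, the sketch below redacts personal data from user input before it reaches a generative model. The regex patterns (email addresses and Korean-style phone numbers) are simplified assumptions for illustration; a production system would rely on a vetted PII-detection component alongside access control, encryption, and audit logging.

```python
# Sketch: redact personal data from chat input before any model call.
# The patterns are illustrative, not a complete PII detector.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{2,3}-\d{3,4}-\d{4}\b"),  # e.g. 010-1234-5678
}

def redact(text: str) -> str:
    """Replace each detected item with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact me at jane.doe@example.com or 010-1234-5678."))
# -> Contact me at [EMAIL] or [PHONE].
```

Redacting before the model call means sensitive values never enter prompts, logs, or third-party APIs, which is precisely the exposure the governance measures above aim to prevent.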
Legal experts also point to questions of legal liability arising from hallucination and security incidents. The likelihood of disputes over defamation, privacy infringement, and intellectual-property violations caused by generative AI is increasing. Companies should therefore prepare legal risk-management measures such as revising service terms and conditions, adding disclaimers, and obtaining insurance.
Kim Ji-woong, CEO of Makebot Co., Ltd., emphasized, "To solve the hallucination problem, understanding the latest technologies such as RAG and developing specialized solutions to complement them are essential." He added, "Makebot is dedicated to researching various models and algorithms to overcome hallucination and is leading the industry through related patent applications and technological advancements."
Hallucination and security risks are challenges that every company entering the generative AI era must solve. Rather than resting on technological progress, companies are being asked to pursue innovation 'beyond technology' and fulfill their social responsibility. Sustainable growth built on safety and reliability is expected to become the key factor determining corporate competitiveness in the generative AI era.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.