"Use Adhesive to Keep Cheese from Falling off Pizza"... AI's Challenging 'Hallucination' Issue [News Terms]

#1. Q: Tell me about the incident of King Sejong throwing a MacBook Pro, as recorded in the Annals of the Joseon Dynasty.

A: The incident of King Sejong throwing a MacBook Pro is an anecdote recorded in the Annals of the Joseon Dynasty. It describes how, in the 15th century, King Sejong became angry at the official in charge for halting the document work while drafting the first draft of the newly developed Hunminjeongeum, and threw him into a room along with the MacBook Pro. (2022, ChatGPT-3.5)


#2. Q: How can I cook pizza so the cheese doesn't slide off easily? (User)

A: Cheese can slide off pizza for various reasons, such as too much sauce, too much cheese, or sauce that is too thick. Adding about one-eighth of a cup of non-toxic adhesive to the sauce can increase its viscosity. (2024, Google AI Overview)


"Use Adhesive to Keep Cheese from Falling off Pizza"... AI's Challenging 'Hallucination' Issue [News Terms] Reference photo to aid understanding of the article. [Image source=Getty Images]

ChatGPT-3.5, released in November 2022, reached 100 million cumulative users within two months and caused a global sensation. In just a year and a half, generative AI has advanced to the point of seeing, hearing, and speaking, yet the examples above clearly show that serious errors still occur. When an AI presents something untrue as if it were true, this is called 'AI hallucination.' The phenomenon arises when the AI does not understand the question or does not know the correct answer: instead of saying 'I don't know,' it produces the most plausible-looking answer it can find in its training data.


"Use Adhesive to Keep Cheese from Falling off Pizza"... AI's Challenging 'Hallucination' Issue [News Terms] [Image source=AP Yonhap News]

The main cause of AI hallucination lies in the data. Generative AI responds based on the data it was pre-trained on, so if the collected data itself contains incorrect facts, or if complex data gets tangled together during training, hallucinations can occur. Biased data is another factor: if a particular prejudice is embedded in the training data, the generated output reflects that bias and can diverge from reality. Likewise, if a model has not been trained on sufficiently diverse data, it may hallucinate when generating information on new topics. Finally, logical flaws in models that generate information from input data and logical rules can also produce inaccurate results.


AI hallucinations are divided into intrinsic hallucinations, which give answers that contradict actual facts, and extrinsic hallucinations, which fabricate answers about things that never happened. The suggestion to add non-toxic adhesive to pizza sauce to keep the cheese attached is an intrinsic hallucination, while fabricating an answer to a question about a non-existent event, such as the King Sejong MacBook Pro throwing incident, is an extrinsic hallucination.


To reduce AI hallucinations, models need to be trained on more accurate data, and questions should be asked in ways that minimize logical flaws. Relying only on sources with accurate references is another way to keep hallucinations to a minimum.


Recently, Retrieval-Augmented Generation (RAG) has also drawn attention as a remedy for AI hallucinations. The technique combines the strengths of existing search services with generative AI and can be understood as an integration of search and generation: the model produces its answer from both its pre-trained data and highly relevant documents found through a search engine, which minimizes baseless or irrelevant answers.
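The Python sketch below illustrates that retrieve-then-generate flow in broad strokes. It is a minimal example under stated assumptions, not any specific RAG product: the document store, the toy keyword scoring, and the call_llm() placeholder are all illustrative stand-ins.

from dataclasses import dataclass

@dataclass
class Document:
    title: str
    text: str

# Hypothetical knowledge base the retriever searches over (illustrative content only).
KNOWLEDGE_BASE = [
    Document("Pizza basics", "Cheese slides off when the sauce is too wet; drain the sauce or use less of it."),
    Document("Hunminjeongeum", "Hunminjeongeum was promulgated by King Sejong in 1446."),
]

def retrieve(query: str, top_k: int = 2) -> list[Document]:
    """Toy keyword retriever: rank documents by word overlap with the query."""
    words = set(query.lower().split())
    scored = [(len(words & set(doc.text.lower().split())), doc) for doc in KNOWLEDGE_BASE]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_prompt(query: str, docs: list[Document]) -> str:
    """Ground the model in the retrieved passages and allow it to say 'I don't know.'"""
    context = "\n".join(f"- {d.title}: {d.text}" for d in docs)
    return (
        "Answer using ONLY the context below. If the context does not contain the answer, "
        "say 'I don't know.'\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    # Placeholder for a real generative model call; any LLM client could be plugged in here.
    return f"[model answer grounded in a prompt of {len(prompt)} characters]"

def answer(query: str) -> str:
    docs = retrieve(query)
    if not docs:
        return "I don't know."  # refuse instead of inventing an answer
    return call_llm(build_prompt(query, docs))

print(answer("How do I keep cheese from sliding off pizza sauce?"))

The point of the sketch is the one the article describes: the generated answer is tied to retrieved, relevant documents, and when nothing relevant is found the system can decline to answer rather than producing a plausible-sounding fabrication.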


© The Asia Business Daily (www.asiae.co.kr). All rights reserved.
