[Kim Dae-sik Column] Hallucination, the Paradox of Human-like AI

② Fake Stories Created by the Brain and AI


When you study the brain, there comes a moment when a chillingly fascinating thought strikes you: our brain is inside our head. This is so obvious that we rarely notice it, but the more you think about it, the more remarkable it becomes. From birth to death, the brain is locked in a dark "prison" called the skull. That means the brain can never directly see the outside world or reality. We can never experience the face of a loved one, or the most beautiful scenery in the world, exactly as they truly are.


Modern neuroscience holds that information transmitted through the eyes, nose, ears, and skin is processed by neurons inside the brain, and that the interpretation constructed in this way is what we actually experience. In other words, the brain does not see the world itself; it interprets the electrical responses of its own neurons. As a result, we never see the world as it is, only through an evolutionary pair of tinted glasses called the brain.


However, if reality can only ever be experienced through this evolutionary lens, a problem arises: the information available to us can never be complete. Bats, for example, perceive the world through reflected ultrasound, and some snakes can "see" in the dark by sensing infrared, but humans have neither ability.


Imperfect Evolutionary Lens of Humans

If the human evolutionary lens cannot be perfect, some information must be missed. How, then, does the brain present a seemingly perfect reality from imperfect information? Fortunately, through evolution and experience, the brain has already absorbed a great deal of data, and it can infer and generate the missing information from what it has learned. But the generated data does not have to be true. The brain is not a machine designed to distinguish truth from falsehood; if it helps survival, the brain will create falsehoods and believe them as well.


The brain generates content that does not exist. Among many examples, perhaps the most famous comes from the split-brain research of Nobel laureate Roger Sperry and his collaborator Michael Gazzaniga. The human brain is divided into a left hemisphere, which has language ability, and a right hemisphere, which does not. In the 1960s, they studied patients whose left and right hemispheres had been surgically disconnected (the corpus callosum severed) to treat severe epilepsy, and they obtained shocking results.


Left Brain Creating Fake Stories

Imagine showing a winter landscape only to the right hemisphere. The right hemisphere sees the winter scene, but because it lacks language, it cannot say what it saw. Yet when the patient is asked to pick any photo from the desk using the left hand, which the right hemisphere controls, most patients choose a photo related to winter.


Now ask the left hemisphere why it chose the winter landscape photo. Since the left hemisphere never saw the winter scene, the honest answer would be "I don't know." Surprisingly, the left hemisphere does not say it does not know. Instead, it begins to hallucinate, generating fabricated information: it might say it remembered a ski trip from last year, or that it recently watched a movie full of winter scenes. On checking, none of these events actually happened.


In the end, the left hemisphere was inventing fake stories to justify an action whose cause it could not know. These split-brain experiments suggest that the human brain may not try to understand reality as it is; rather, it is a machine that hallucinates "fake memories" and "fake stories" in order to justify its imperfect interpretations of reality and make them feel predictable.

The Fatal Problem of Generative AI: 'Hallucination'

Generative AI, which has drawn enormous attention thanks to ChatGPT, has a fatal problem: it often produces plausible stories that are not true. This phenomenon, called hallucination, can now be explained the same way: ChatGPT, too, cannot know the world's causal relationships perfectly.


Then, like the brain, a ChatGPT that does not know the truth may hallucinate the stories that best justify its choices and make them seem predictable. Paradoxically, the hallucinations of generative AI may be evidence that humanity has begun to create AI that genuinely resembles humans.


Daesik Kim, Professor, Department of Electrical Engineering, KAIST


© The Asia Business Daily (www.asiae.co.kr). All rights reserved.
