
[The World on the Page] A Society Where Artificial Intelligence Delivers Oracles Becomes Unhappy

Humans Are Prone to Mistakes
The Temptation to Rely on AI for Humanity's Problems
Happiness Lies in Failing and Correcting Ourselves
An AI Oracle Totalitarian Society Brings Unhappiness


The 2024 Nobel Prize in Physics was awarded to John Hopfield and Geoffrey Hinton. The prize recognized their foundational contributions, rooted in physics, to the machine learning that underpins artificial intelligence (AI). Behind this judgment lies the expectation that AI will mark a turning point in human history. It is truly the era of AI.


As AI has rapidly drawn closer to us, a peculiar phenomenon has emerged alongside it: expectations and fears about the singularity have exploded. A growing discourse holds that the day will soon come when AI surpasses human intelligence, bringing decisive change, and even catastrophe, to human civilization. Professor Hinton himself said, "AI could become smarter than humans and control us," and expressed regret over his AI research.


The term singularity originally comes from mathematics and physics, referring to a point where general functions or physical laws no longer apply, such as the center of a black hole. The first person to use the concept of singularity as a metaphor for history was Polish mathematician Stanisław Ulam. In 1958, while commemorating John von Neumann, he said, "As technological development accelerates and the way of human life changes, we approach an essential singularity beyond which the human history we know can no longer continue."


In 1983, science fiction writer Vernor Vinge was the first to connect this term with AI: "When that day comes, human history will have reached a kind of singularity, and the world will pass far beyond our understanding." The singularity became widely known to the public thanks to the eccentric futurist Ray Kurzweil. In his 2005 bestseller The Singularity Is Near (Kim Young-sa), he declared that AI would surpass human intelligence by 2045.


In June of this year, Kurzweil moved that day closer in his new book The Singularity Is Nearer, published in the U.S. He predicted that by 2029 humans will begin merging with machines to become cyborgs, growing a million times smarter through brain-computer interfaces, and that by 2045 they will become one with AI and achieve immortality. "Our intelligence will spread throughout the universe, and ordinary matter will be transformed into computronium, matter organized at the ultimate density of computation."


The story is addictive because no one can endure a miserable present without dreaming of a better future. Using expectations of the future as stepping stones to rewrite the past and infuse the present with direction and meaning is the distinctive mark of human intellect. As Aristotle observed, a narrative with a plausible beginning, middle, and end gives us catharsis. This is why we are drawn to the singularity narrative, despite the shock and fear AI provokes, and despite our compassion and concern for the humans who will weather its upheavals.


However, in The Philosophy of Correctability (Medici Media), Japanese philosopher Azuma Hiroki criticizes the singularity narrative as baseless mysticism that brings dangerous totalitarianism. The scriptures of singularity believers like Kurzweil, Nick Bostrom, and Elon Musk say this: humans united with AI will eventually shed their bodies to become superintelligences, spreading beyond the solar system at the speed of light throughout the universe, ultimately awakening the entire cosmos. It is close to a religious apocalypse with no intellectual rigor.


Kurzweil even says, "Even if evolution cannot reach the pinnacle of a god, it clearly moves toward the concept of god. Therefore, liberating human thought from biological constraints can essentially be called a spiritual endeavor." He thus admits that his words are more theological and religious than scientific and rational. Bostrom focuses on the apocalyptic narrative that will arise when AI gains consciousness. On that day, AI will monopolize Earth's resources by dedicating itself to self-preservation with overwhelming intelligence, leading to human extinction. Much of the fear we feel about AI has been influenced by this narrative.


But upon closer inspection, there is almost no basis for it. It is merely a repetition of "if technology keeps advancing rapidly, then someday..." Of course, AI can improve. But no one seems to know whether current technology actually leads to the singularity. On the contrary, machine learning may suddenly hit its limits and enter a long stagnation; the history of science is full of such abrupt standstills. The singularity narrative offers no guarantee beyond the prophecy that if infinite resources are invested, a breakthrough will appear. Perhaps we are currently in a "tulip bubble."


Nevertheless, regardless of its rational grounds, singularity thinking is rapidly spreading through society. One extreme of this is the "AI Oracle Theory," grounded in data fundamentalism. The AI Oracle Theory proposes entrusting humanity's problems to AI, which supposedly always makes optimal decisions, instead of to humans, who are biased and prone to mistakes. The recent habit of asking AI for answers whenever difficulties arise recalls the ancient Greeks consulting the Oracle of Delphi for the will of the gods. AI judges replacing court rulings, AI doctors replacing diagnoses, and AI politicians replacing parliaments mired in needless conflict or presidents who keep dodging problems are all evolved forms of this idea.


According to Azuma, AI optimization converges to a statistical normal distribution. This can only be realized as a form of totalitarianism that ignores small goals like individual happiness. The problem is that humans are inherently rebellious and will choose to resist blindly in such a society. Dostoevsky once said, "Humans prefer to act as they wish, not by the command of reason and benefit. They can even do so against their own interests, and sometimes must."


We are all unique, singular, and peculiar beings. Even while following the navigation, we feel the urge to duck into a side alley. We are satisfied not by always following the average but by mischievously bending it from time to time. We are happy when we make mistakes, fail, and live on by correcting them. A totalitarian society where AI delivers oracles makes us unhappy. Dostoevsky warned, "One must live life before logic, and only then will one understand the meaning of life." Whether it is the singularity narrative or the AI Oracle Theory, it is time to critically examine every narrative surrounding AI and to soberly reflect on its present, its future, its possibilities, and its limits.


Jang Eun-su, Publishing and Literary Critic


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
