Exaggerated Hype Around Generative and Predictive AI
Examine What Data It Was Trained On
Question the Margin of Error
"It nurtures health and beauty."
In the streets of 19th- and 20th-century America, "snake oil" was peddled widely as a miracle cure or tonic. None of the effects the salesmen promised ever materialized, and the product itself was never capable of delivering them. Even so, buyers were desperate and, above all, wanted to believe the advertising. Sometimes simply hearing that something was the "latest product" provided a sense of reassurance, even if no one quite understood it.
The new book "The AI Bubble Is Coming" draws a parallel between today's artificial intelligence (AI) market and snake oil. Co-authored by Arvind Narayanan and Sayash Kapoor, computer scientists at Princeton University, the book offers criteria for distinguishing between "real innovation" and "fake technology" in the AI space. It emphasizes the need to strip away exaggerated marketing and choose technologies that actually work, rather than chasing illusions.
The book divides AI into two main categories: "generative AI" and "predictive AI." Generative AI can produce plausible sentences and boost productivity, but rather than true "intelligence," it is closer to probability-based generation: it predicts likely words, not true ones. As a result, so-called "hallucinations," convincing but false outputs, can occur. The problem is that generative AI is being deployed indiscriminately in areas where verification is insufficient, while its capabilities are oversold through exaggerated marketing. Fabricated news copy, flawed legal documents, and invented case citations have repeatedly shaken public trust.
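To make that mechanism concrete, here is a minimal sketch, not from the book, of probability-based generation: a toy bigram model that picks each next word according to how often it followed the previous word in its training text (the corpus and the generate function below are invented for illustration). Real large language models are incomparably larger, but they share the core trait that words are chosen by likelihood, not checked against fact.

```python
# Toy bigram "language model": learn which word tends to follow which,
# then sample the next word from that frequency distribution.
import random
from collections import defaultdict

corpus = (
    "the model writes fluent text . "
    "the model predicts the next word . "
    "fluent text is not always true text ."
).split()

# Count bigrams: how often each word follows each other word.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Sample a continuation word by word from the bigram distribution."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))  # frequency-weighted pick
    return " ".join(words)

print(generate("the"))
# e.g. "the model writes fluent text is not always true"
# Grammatical-looking, yet stitched together with no notion of fact:
# this is the mechanism behind "hallucinations."
```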
An even more serious area is predictive AI, which is used to make decisions that directly shape human lives in domains such as hiring, public safety, and healthcare. Although it is often marketed as if it can foresee the future, in reality it can only extrapolate from patterns in the data it has observed. Because the future of human society is inherently uncertain, these limits do not disappear no matter how much the quantity and quality of data improve.
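A small, self-contained sketch, again ours rather than the authors', shows why more historical data cannot remove this limit: a model fit on the past keeps projecting the past even after the underlying process changes (the world function and the structural break at year 5 are invented for illustration).

```python
# A model trained on yesterday's world is confidently wrong once the
# underlying process shifts; no amount of pre-shift data fixes this.
import random

random.seed(0)

def world(year: int) -> float:
    """Hypothetical outcome process that quietly changes after year 5."""
    base = 1.0 if year <= 5 else 3.0  # structural break the model never saw
    return base + random.gauss(0, 0.2)

# "Train": estimate the outcome from years 1-5 only.
history = [world(y) for y in range(1, 6)]
prediction = sum(history) / len(history)

# "Deploy": the model keeps predicting the old average after the break.
for year in range(6, 9):
    actual = world(year)
    print(f"year {year}: predicted {prediction:.2f}, actual {actual:.2f}")
# The error is not a data-quantity problem; the future simply was not
# in the observable record.
```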
In the United States, automated tools have been widely adopted in the early stages of recruitment, and applicants have raised concerns about their opacity. Because candidates cannot know the criteria by which they are screened, resumes are often rejected before a human ever reads them. Applicants respond by optimizing their resumes with keywords to match the evaluation systems, companies add countermeasures to defeat this, and the back-and-forth continues, as the sketch below illustrates. The moment technology enters society, people begin to respond to it strategically. This is why good predictions do not necessarily guarantee good decisions.
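To see why the arms race is built in, consider a hypothetical keyword screener; the keyword list, threshold, and screen function are our invention for illustration, not any real vendor's system.

```python
# The score measures keyword presence, not ability, so it invites
# strategic behavior the moment its rule becomes known.
REQUIRED_KEYWORDS = {"python", "leadership", "agile", "cloud"}  # assumed criteria

def screen(resume_text: str, threshold: int = 3) -> bool:
    """Pass a resume if it mentions enough of the required keywords."""
    words = set(resume_text.lower().split())
    return len(REQUIRED_KEYWORDS & words) >= threshold

honest = "Built data pipelines in Python and mentored two junior engineers."
gamed = "python leadership agile cloud"  # keyword stuffing, zero substance

print(screen(honest))  # False: a qualified candidate is filtered out
print(screen(gamed))   # True: an empty resume sails through
# Once candidates learn the rule, the metric stops tracking what it was
# meant to measure; a "good prediction" here is not a good decision.
```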
Before succumbing to apocalyptic fears that AI threatens human identity, we must be wary of "fake AI" that quietly infiltrates our wallets and institutions. Companies may end up spending budgets on expensive solutions without being able to explain their results, and in the public sector, there is an increasing risk that opaque decisions will be delegated to systems under the name of "AI administration."
The attitude the book calls for is surprisingly simple. We cannot reject AI outright, but we can boldly filter out "technologies that do not work." Snake oil salesmen disappeared not because people grew healthier, but because repeated lies eventually collapsed trust. Rather than shrinking before claims that "AI will change every aspect of our lives," we should persistently ask questions such as "What data was it trained on?", "What is the margin of error?", and "Who is responsible when it fails?"
Above all, the book emphasizes the "acceptance of uncertainty." Only by recognizing the randomness and limitations of AI-generated results, it argues, do better decisions and policies become possible. "We must strive to create truly open institutions that acknowledge the fact that the past cannot predict the future. Such a world is entirely possible, if only we can embrace the randomness that underpins our lives."
The AI Bubble Is Coming | Written by Arvind Narayanan and Sayash Kapoor | Translated by Kang Mikyung | Willbook | 420 pages | 24,800 KRW