[New Year Interview②] Marcus: "AGI Is Possible, but Today's AI Is Just a Pattern Recognizer... AI Investment Is as Overheated as the Dot-Com Bubble"

Gary Marcus, Professor Emeritus at New York University
LLMs Are Sophisticated Imitators: Limits in Generalization and Reliability
Deep Learning Alone Cannot Achieve AGI: Need for a Neuro-Symbolic Shift
GDP Growth Driven by Speculative Investment
'Work Slop': Generative AI Can Lower, Not Lift, Productivity

"The large language models (LLMs) that currently underpin artificial intelligence (AI) may appear impressive, but at their core, they are closer to an 'extended version of imitation' that generates results by combining existing data. There are clear limitations in terms of accuracy and reliability, yet market expectations and investment volumes are far outpacing the actual maturity of the technology."


Gary Marcus, Professor Emeritus of Psychology and Neuroscience at New York University (NYU). Provided by Professor Marcus

Gary Marcus, Professor Emeritus of Psychology and Neuroscience at New York University (NYU) and one of the United States' most widely recognized AI critics grounded in cognitive science, offered this assessment of the current AI boom in a recent New Year interview with The Asia Business Daily, conducted via video call. He has spent more than 30 years researching the structural limitations and risks of AI, and has consistently argued that artificial general intelligence (AGI) is difficult to achieve through deep learning-centric approaches alone.


Professor Marcus described LLMs, which are at the core of today’s AI systems, as being "closer to pattern recognizers than to systems that truly understand new concepts." As a result, he explained, these models are prone to errors in unfamiliar situations or real-world environments outside their training scope, and have yet to achieve the level of reliability that society expects.


This perspective stands in stark contrast to the optimism that has recently spread among some AI companies and investors, particularly in Silicon Valley. The industry claims that LLMs have already demonstrated tangible utility in limited areas such as productivity improvement, and that their generalization abilities are rapidly improving through model scaling and advances in multimodal and agent technologies. Professor Marcus, who has long held a critical minority view against this mainstream position, counters that such partial successes do not necessarily translate into general reliability.


He does not deny the long-term potential for AI development itself. However, he emphasized that "achieving AGI will require a structural transformation, such as 'neuro-symbolic AI,' which combines traditional symbolic AI with neural network-based approaches."


He expressed particular concern about the overheated investment surrounding the AI industry. He stated, "A significant portion of recent U.S. gross domestic product (GDP) growth has stemmed not from genuine productivity gains due to AI, but rather from speculative investment driven by AI hype." He noted that, apart from semiconductor companies like Nvidia, there is still limited clear evidence that AI is actually generating massive profits.


He also pointed to a phenomenon he calls "work slop," in which generative AI, rather than improving work efficiency, ends up lowering productivity by increasing the error correction and rework it requires.


Professor Marcus concluded by warning, "Even during the dot-com bubble, the direction of the technology was correct, but the timing of the investment was wrong. AI will also succeed someday, but the current market is racing far ahead of that point in time."


The following is a Q&A with Professor Marcus.


-How do you assess LLMs, which are at the core of today’s AI systems?

▲The most fundamental limitation of LLMs lies in generalization. These models excel at producing results similar to data they have already seen, but they are weak at deep understanding or reasoning in new situations. When faced with situations that are not well represented in their training data, they are prone to making mistakes. They do not simply copy, but they are, in essence, highly sophisticated imitators.


-Where do you think the current deep learning-centric approach stands in the discussion about AGI?

▲It is difficult to achieve AGI using only the current deep learning approach. While I believe the emergence of AGI is possible, it will require a much more sophisticated integration of traditional symbolic AI and neural network-based approaches. A structural breakthrough such as so-called 'neuro-symbolic AI' is needed, and I believe new ideas that have not yet emerged will also be required.


-What is your view on the current social and market evaluation of AI technology and industry?

▲Overall, AI is currently overhyped. The most dominant technology, LLMs, still falls short in terms of accuracy and reliability. Even in areas where they are considered to work relatively well, such as computer programming, errors remain frequent. LLMs are essentially pattern recognizers, and most real-world problems require abilities that go beyond that.


-There are claims that AI has boosted productivity and driven U.S. economic growth.

▲That claim is quite misleading. A significant portion of current GDP growth has resulted not from actual productivity gains, but from speculative investment in AI. According to several studies, including those from the Massachusetts Institute of Technology (MIT), most companies have not achieved visible returns on their AI investments. (MIT's "2025 AI in the Enterprise Landscape" report, which surveyed companies and analyzed 300 publicly reported generative AI adoption cases, found that although companies have invested tens of billions of dollars, 95% of them have seen no visible return on investment.) After the emergence of ChatGPT, there were expectations that "AI would do the work of ten employees," but in reality, cases where errors have reduced work efficiency are increasing.


-Are the market expectations and stock valuations of AI companies justified? Some have raised concerns reminiscent of the dot-com bubble.

▲I do not think they are justified. Nvidia is an excellent company, but its current valuation is based on the assumption that other companies will generate massive profits using Nvidia chips. However, there is still insufficient clear evidence to support that assumption. The circular financial structure, in which Nvidia invests in OpenAI and OpenAI purchases large volumes of Nvidia semiconductors, also risks exaggerating actual demand. In terms of overheated investment, the current AI boom is very similar to the dot-com bubble.


-What impact do you think AI will have on the labor market?

▲Short-term fears are exaggerated. At present, it is difficult to say that AI is replacing jobs on a large scale. Some companies are using AI as an excuse to cut staff for cost reduction. In 20 years, however, the situation is likely to change. AI development will eventually reach a phase where it genuinely replaces the labor market, and this will be a challenge that society as a whole will inevitably have to face.


-How do you view the short- and long-term risks of AI?

▲In the short term, there are already real risks such as cybercrime, misinformation, and non-consensual deepfake pornography. In the long term, the most important challenge will be the alignment problem, that is, keeping AI's behavior in line with human intentions. This risk is especially acute in fields where the cost of errors is critical, such as the military. I do not believe that AI will lead to human extinction, but the potential for serious harm to humanity through the spread of misinformation, bioterrorism, or military miscalculation is entirely realistic.


About Professor Gary Marcus

Professor Gary Marcus is a world-renowned cognitive scientist and AI critic. As Professor Emeritus of Psychology and Neuroscience at NYU, he has analyzed the structural limitations and potential risks of AI based on his research since the mid-1990s into how humans understand concepts and learn rules. He is considered a leading figure in the United States for consistently warning against the overvaluation of AI in public discourse and raising issues of safety and reliability. His major books include "The Algebraic Mind," "Rebooting AI," and others, some of which have been translated and published in Korea. He earned his bachelor’s degree in cognitive science from Hampshire College and his PhD in the same field from MIT.


© The Asia Business Daily (www.asiae.co.kr). All rights reserved.
