[The Future of Work, The Impact of Physical AI] ① Jerry Kaplan: "Physical AI and Job-Apocalypse Fears Are Overblown... We Have a 20-Year Golden Time for Social Acceptance"

Physical AI: The End of Work or a Step in Human Evolution?
Asking Four Global Scholars and Policy Experts About "The Future of Work"

Editor's Note: The era of "physical AI," in which artificial intelligence (AI) that once stayed on screens now takes on robotic bodies and performs real-world labor, has arrived. Recently, the labor union at Hyundai Motor strongly protested the introduction of Atlas, a bipedal robot, underscoring that the conflict between technological progress and job insecurity is no longer hypothetical. Is physical AI the end that heralds the disappearance of work, or an evolutionary step that will expand humanity's capabilities? The Asia Business Daily asks four global scholars and policy experts about the future of human labor: Jerry Kaplan, professor at Stanford University; Ken Goldberg, professor at UC Berkeley; Seo Yongseok, professor at KAIST; and Assemblyman Cha Jiho of the Democratic Party of Korea. Over the course of four installments, we explore alternatives and paths to coexistence in an era marked not only by technological shock but also by a demographic cliff.

Jerry Kaplan, a professor at Stanford University in the United States and a world-renowned authority on artificial intelligence (AI) and futurist, has issued a cautionary message about excessive expectations surrounding robots and other forms of physical AI, as well as about apocalyptic claims that employment will come to an end. He pointed out that there is a considerable time lag between the pace of technological progress and its actual application in industry and society, stressing that a prudent approach is necessary.


Jerry Kaplan, Professor at Stanford University, United States. [Photo: Jerry Kaplan]

In a recent written interview with The Asia Business Daily, Kaplan said, "There is clearly a bubble in the current global AI boom, and it will inevitably burst someday," adding, "I acknowledge the potential of AI, but it will take another 10 to 20 years before the technology is fully integrated into the human world." He explained that in the case of physical AI in particular, it will take a significant amount of time before it can be safely integrated into real industrial settings and everyday environments.


He also believes that concerns about the job market are overstated. "It takes a long time for new technologies to be absorbed by organizations such as companies and to gain social acceptance," he said, adding, "Fears of a sudden end to employment are exaggerated." He noted that it is worth remembering that it took more than 20 years after the advent of the internet for it to trigger structural change across industries. His point is that physical AI is also likely to spread gradually, accompanied by a comprehensive redesign of institutions, organizations, and work environments.


Kaplan also highlighted the gap between the achievements of generative AI and the current reality of robotics. "Today's astonishing generative AI systems are trained on massive amounts of text, that is, human language," he said. "But just as you cannot immediately ride a bicycle simply because someone explains in words how to ride one, machines such as robots need a much broader range of knowledge and understanding of the physical world in order to work safely alongside people in real environments." In other words, learning based on language alone is not enough to handle complex physical environments.


He continued, "Recognizing this, AI researchers are trying to adapt the power of generative AI to physical machines such as robots," but he assessed that "with the exception of a few special cases like self-driving cars, relatively little progress has been made on this problem." His diagnosis is that for general-purpose robots to secure a level of autonomy comparable to humans, they will need additional technological accumulation not only in algorithms but also across sensing, control, and safety verification.


On ethical issues, he took a relatively calm stance, premised on the possibility of technical control. "I do not view ethical challenges as a major barrier," he said. "If we can fully solve the problems of physical control, then the ethical issues of physical AI will be no more difficult than those we see with large language models." He made it clear that, ultimately, the key is not the speed of the technology, but how precisely we can understand and control the physical world.



When asked how manufacturing powerhouses such as Korea should respond to a future led by artificial intelligence (AI), Kaplan emphasized a "qualitative superiority strategy" based on accumulated industrial competitiveness. "Korea has a long history of leveraging and mastering new technologies, from shipbuilding to computer chips and displays," he said. His assessment is that Korea's asset lies not merely in adopting technologies, but in embedding them in industrial sites and commercializing them.


He drew a clear line against being consumed by speed competition. In particular, he said, "After taking the time to understand new technologies and how they can be productively employed, you should create high-quality products that meet market demand," adding, "In the end, what wins is not the first product to reach the market, but the best one." His point is that in the long run, completeness and market fit, rather than early technological entry, determine competitiveness.


In line with this industrial strategy, he also suggested what individuals should prepare for. Kaplan cited two essential qualities that future generations must possess in the AI era. One is the knowledge and capability to understand and manage AI systems in order to obtain desired outcomes; the other is the ability to connect with others in an authentic way.


"It is important to predict and understand how others feel and what they want, and to connect emotionally through empathy," he said. "Machines may be able to mimic such traits, but people will ultimately want to connect with other human beings, not with machines." He then added, "Robots are not going to marry our children." In other words, a world-leading AI authority is reiterating that the more technology advances, the more crucial uniquely human capacities for empathy and relationship-building will become as sources of competitiveness. Even in an era when algorithms replace much of our work and decision-making, what ultimately determines outcomes is trust and connection between people.

About Professor Jerry Kaplan
Jerry Kaplan is a Fellow at the Center for Legal Informatics (CodeX) at Stanford University in the United States and a Visiting Lecturer in the Department of Computer Science. He is also a Silicon Valley entrepreneur, futurist, and bestselling author. In 1979, he earned his Ph.D. in Computer and Information Science from the University of Pennsylvania in the United States. His major works include "Generative Artificial Intelligence: What Everyone Needs to Know" (2024), "Artificial Intelligence: What Everyone Needs to Know" (2016), "Humans Need Not Apply" (2015), and "Startup: A Silicon Valley Adventure" (1995).


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
