
The Moment an AI Robot Turns Into a 'Killer' [AI Mistake Notes]

"Humanoids: Anticipating a Second iPhone Revolution"
Cases of Uncontrollable Robots Threatening Humans in Chinese Research Institutes
As AI Becomes More Autonomous, Safe Control Measures Must Be Considered

Editor's Note: Examining failures is a shortcut to success. 'AI Mistake Notes' explores cases of failure involving AI products, services, companies, and individuals.
With the rapid advancement of AI performance, expectations for the humanoid robot industry are growing. Pixabay

"It will become a stage for disruptive innovation that surpasses the smartphone."

Elon Musk, CEO of Tesla, released a video of the company's humanoid robot 'Optimus' dancing on X (formerly Twitter) on May 13. In the six-second video, Optimus moves its arms and legs much like a human. The movements were so natural that it quickly became a hot topic on social media.


With the rapid advancement of artificial intelligence (AI) infrastructure, the humanoid robot industry is stirring to life. Expectations are growing that robots, which would otherwise remain mere hunks of metal, will spark a revolution surpassing the 'iPhone shock' once paired with 'smart AI.' Jensen Huang, CEO of Nvidia, predicted in March that "the era when humanoid robots are widely deployed in manufacturing will arrive soon," emphasizing that "it is a matter of just a few years."


As Jensen Huang noted, humanoid robots are drawing attention as a groundbreaking solution to labor shortages in manufacturing, including factories and logistics centers. Furthermore, they are expected to be utilized in a wide range of fields, such as hospitals, nursing facilities, and homes.


Tesla has begun limited production of the Optimus robot at its Fremont factory and plans to produce over 1,000 units this year for use within its own facilities. The selling price is expected to range between $20,000 and $30,000 (28 million to 42 million KRW). X (Tesla_Optimus)

The market is also heating up. In a report last month, Morgan Stanley projected that the humanoid robot market would reach $4.7 trillion (about 6,572 trillion KRW) by 2050. This figure is double the total revenue of the top 20 automobile manufacturers as of 2024. Morgan Stanley identified Tesla, Nvidia, Amazon, and Alphabet as the leading companies in the humanoid robot market.


Morgan Stanley's list also includes Chinese companies such as Alibaba and Tencent. China is rapidly emerging as a humanoid robot powerhouse, leveraging price competitiveness and advanced technology. Investors are paying particular attention to the synergy between China's homegrown AI models, such as DeepSeek, and the country's massive manufacturing infrastructure. The humanoid robot market, however, is not without its challenges.


If Robots Threaten Humans: The Uncontrollable Incident at a Chinese Research Institute

A robot malfunction incident occurred at a research institute in China. SOH

Can a large, heavy, freely moving hunk of metal go 'out of control' and lash out the way a person might?


Recently, there was an incident at a robot research institute in China where a humanoid robot went on a 'rampage.'


According to a video released by Sound of Hope (SOH), two engineers at the institute were testing a humanoid robot suspended from a mini crane. As they moved its arms and legs, the robot suddenly appeared to lose control and began swinging its limbs violently. The startled engineers stepped back, and the robot's movements grew even more intense. SOH explained that the incident "appears to have exposed errors in the experimental-stage humanoid robot."


This was not the first time a humanoid robot had gone on a rampage. A similar incident occurred at a lantern festival in Taishan, China, in February, when Unitree's 'H1' robot suddenly swung its arm toward a person and behaved aggressively. The robot's developer attributed the incident to "a program setting or sensor error."


Asimov's First Law of Robotics: Do Not Harm Humans
Isaac Asimov, a Soviet-born American science fiction writer and biochemistry professor. Britannica


First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


These are the 'Three Laws of Robotics' devised by Isaac Asimov, a giant in the science fiction world. Asimov first mentioned the Three Laws in his 1942 short story 'Runaround.' His works, such as 'I, Robot,' are directly or indirectly related to these laws.


The essence of the Three Laws is simple: robots ① must not harm humans, ② must obey human orders, and ③ must protect themselves. Each law takes precedence over the ones that follow. While these principles may seem straightforward and logical, in Asimov's stories, they are constantly challenged, broken, and lead to unexpected consequences.
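
As a toy illustration of that precedence (the code below is a sketch invented for this discussion; the Action fields are assumptions, not anything from Asimov or the article), each law can be checked in priority order, with a higher law always overriding the ones beneath it:

# A toy sketch of Asimov-style rule precedence (illustrative only; the
# Action fields are invented for this example).
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would this action injure a human?
    ordered_by_human: bool  # was this action ordered by a human?
    endangers_robot: bool   # would this action damage the robot itself?

def evaluate(action: Action) -> str:
    # First Law: never harm a human, whatever the orders or the cost to itself.
    if action.harms_human:
        return "forbidden (First Law)"
    # Second Law: obey human orders unless they conflict with the First Law.
    if action.ordered_by_human:
        return "required (Second Law)"
    # Third Law: self-preservation, unless it conflicts with the laws above.
    if action.endangers_robot:
        return "avoid (Third Law)"
    return "permitted"

# An order to harm a human is refused: the First Law outranks the Second.
print(evaluate(Action(harms_human=True, ordered_by_human=True, endangers_robot=False)))
# An order that merely endangers the robot is obeyed: the Second outranks the Third.
print(evaluate(Action(harms_human=False, ordered_by_human=True, endangers_robot=True)))

Asimov's point, of course, is that real situations rarely reduce to such clean yes-or-no checks, and that is precisely where his stories find their conflicts.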


Asimov's stories offer important lessons for the current era of the AI revolution. No matter how well-designed the rules may be, sufficiently advanced AI systems can still produce unintended results. Conflicts between rules, ambiguous interpretations, and unpredictable chains of reasoning can all lead to unexpected behaviors.


The AI systems we are developing today are becoming increasingly complex. Even now, developers are often unable to explain the outcomes produced by AI. Like the robots in Asimov's stories, our AI may explore the boundaries of the rules we set and sometimes exploit the gaps. In this sense, the Three Laws of Robotics are not simply fiction, but may have anticipated the importance of AI safety today.


A situation in which AI escapes human control and acts on its own could arise at any time, as AI grows ever more intelligent and autonomous.


So, how can we design and operate AI safely? It is worth referring to a report by the market research firm Gartner, which suggests several ways to prevent AI from becoming uncontrollable.


Smarter and More Autonomous AI Robots... What Are the Countermeasures?
What criteria are needed in a world where more AI systems make decisions and act autonomously? The photo shows a robot holding a scale. Pixabay


Gartner first recommends, "Identify risk areas in advance and minimize them." If AI is deployed and used everywhere without distinction of role or purpose simply because it is intelligent, the risks increase accordingly.


Jorge Lopez, a senior vice president and analyst at Gartner, says, "Empirically, the best approach is to choose the least sophisticated AI technology that can achieve your business goals."


In other words, use AI only as much as necessary. The broader the scope in which AI learns, intervenes, and acts on its own, the greater the possibility that it will operate beyond control and contrary to intent.


Another interesting point is the emphasis on the 'conscience' of AI.


It is necessary not only to provide AI with goals, but also to build a conscience that can distinguish between what it should and should not do.


If only the 'goal' is emphasized, extreme outcomes can follow. Suppose, for example, you build an AI recommendation chatbot for an online shopping mall. If its only goal is to 'maximize profit,' the AI will indiscriminately push products that customers neither want nor need.
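
As a minimal sketch of this example (the product data, field names, and relevance threshold below are invented for illustration, not a real recommender system), the difference between a profit-only objective and one constrained by a 'conscience' might look like this:

# A minimal sketch of the shopping-mall example (the products, field names,
# and relevance threshold are invented assumptions, not a real recommender).
products = [
    {"name": "budget charger", "margin": 2.0,  "relevance": 0.9},
    {"name": "luxury watch",   "margin": 50.0, "relevance": 0.1},
    {"name": "phone case",     "margin": 5.0,  "relevance": 0.8},
]

def score_profit_only(p):
    # Goal-only objective: maximize profit, ignoring what the customer wants.
    return p["margin"]

def score_with_conscience(p, min_relevance=0.5):
    # Goal plus a constraint: irrelevant products are ruled out before
    # profit is even considered.
    if p["relevance"] < min_relevance:
        return float("-inf")  # never recommend what the customer doesn't need
    return p["margin"]

print(max(products, key=score_profit_only)["name"])      # -> luxury watch
print(max(products, key=score_with_conscience)["name"])  # -> phone case

Even this crude relevance floor changes the outcome: the profit-only objective pushes the high-margin item the customer has no use for, while the constrained objective does not.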


Of course, if there are too many restrictions, AI may act in a rigid, uncreative manner. That is why it is important for humans to intervene and maintain balance.


Another key point is cultivating experts, and a culture of responsibility, capable of observing and designing for these issues holistically. AI is not merely a technical matter; it is also a matter of people (experts) and organizational culture.


Often, one team develops the AI service while another is responsible for operating it, and problems can arise even when no one makes an operational error. In such cases, the advice is to foster a culture in which responsibility and solutions are shared, rather than blame being passed around.


© The Asia Business Daily (www.asiae.co.kr). All rights reserved.

