[AI New Year Interview] Geoffrey Hinton: "There Is a 50% Chance That AI Surpassing Humans Will Seize Control"

Exclusive Interview with Leading AI Scholar ①
'Doomer' Hinton Warns of Killer Robots
"Existential Threat Concerns... Regulating AI Is Much Harder Than Nuclear Weapons"

"There is a 50% chance that artificial intelligence (AI) will surpass human intelligence within the next 5 to 20 years. There is also a 50% chance that AI, having exceeded human intelligence, will seize control from humans."


Geoffrey Hinton, the "Godfather of AI" and professor at the University of Toronto who first conceptualized deep learning, the foundation of AI training, issued this warning about the dangers of AI in a recent New Year's interview with The Asia Business Daily. Professor Hinton, a recipient of the Turing Award, often called the Nobel Prize of computer science, made headlines last year when he abruptly left Google and expressed regret over his decades of AI research. At that time, the world was still marveling at the technological advances and convenience brought by ChatGPT.

'AI Godfather' Geoffrey Hinton, Professor at the University of Toronto [Image provided by Professor Geoffrey Hinton]

In the interview, Professor Hinton again likened AI to "nuclear weapons," warning that AI, once it becomes more intelligent than humanity, could become a "killer robot" that controls human society. Drawing an analogy between how AI operates and the neural networks of the human brain, he stated, "There is a 50% chance that AI's reasoning ability will surpass that of humans in as little as 5 years, or at most 20 years." He further predicted that AI, having learned everything created by humans and autonomously generating and executing computer code, "also has a 50% chance of seizing control from humans." In that case, he argued, AI would become an existential threat to humanity not only in science fiction movies but also in reality.


The problem is that, at present, not only is there no response to these dangers of AI, but even the direction of discussion remains unclear. Although Professor Hinton is regarded as a leading "doomer" (one who predicts catastrophe) in the AI field, he does not advocate for a temporary halt to AI research, unlike some other doomers. Regarding ongoing discussions about AI regulation in various countries, he said, "It is very unclear what might be effective," adding, "AI is developing so rapidly that it is much harder to regulate than nuclear weapons. Considering the benefits AI brings, the pressure to develop it is enormous." In other words, he sees it as realistically difficult to halt or regulate AI research itself.


When asked what the world should do now, Professor Hinton simply replied, "It is time to try to find ways to make sure AI does not want to control humanity." When further asked whether this answer means that, for now, the best course of action is to remain vigilant about the dangers of AI and continue discussions from each person's position, he answered, "Yes." Previously, Professor Hinton had cited his desire "to discuss this issue (the threat of AI) freely" as the reason for resigning from Google. He also accepted this interview with The Asia Business Daily because he wanted to warn Korean readers about the dangers of AI and encourage collective reflection on the issue.


Amid accelerating global competition in AI development, particularly between the United States and China, Professor Hinton also stated that there is a way to encourage China to participate in the safe development of AI. He said, "There is one threat that the US and China can cooperate on," explaining, "It is the threat of AI controlling humanity. Neither side wants this." He emphasized that as existential threats draw nearer, international cooperation will become inevitable.


The risks posed by AI are not just a matter for the distant future. In this interview, Professor Hinton also pointed out specific immediate dangers that generative AI can cause. He first mentioned, "In the short term, the worst consequence of AI will be its ability to easily deceive voters with fake images, videos, and so on," raising the possibility of election manipulation.


He also agreed with a recent warning by Yuval Harari, professor at the Hebrew University, that AI could trigger a global financial catastrophe, saying, "He could be right." In data-driven financial markets, if AI gains control and begins creating new financial instruments, it could become nearly impossible to predict the risks. However, he pushed back on concerns that AI will take away human jobs, saying, "It will make people much more efficient." He added, "There may be fewer jobs, but it could also increase people's productivity."


Regarding the conflict over AI development that came to light during the sudden dismissal and reinstatement of Sam Altman, CEO of OpenAI, Professor Hinton declined to comment, saying, "I do not want to speak about the situation as I do not know the details." At the time, the incident was seen as exposing a rift both inside and outside the industry between doomers, who believe AI could pose an existential threat to humanity, and "boomers" (development advocates), who argue that AI technology will advance humanity and that fears of killer robots are mere imagination. It also raised more fundamental questions about how the world should safely develop and utilize AI. Ilya Sutskever, who joined the board in dismissing CEO Altman out of caution over rapid AI development and commercialization, is a disciple who shares Professor Hinton's philosophy on AI.


As a leading scholar in the AI doomer camp, Professor Hinton offered one rebuttal in this interview to the boomers who underestimate the threat of AI: "Have you ever seen something more intelligent being controlled by something less intelligent?" He emphasized that there is no such example in all of history, and that this is why we must be vigilant about the dangers AI may bring, starting now.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

