Recipients of this year's Nobel Prizes for contributions to artificial intelligence (AI) research expressed serious concern on the 7th (local time) that so-called 'superhuman AI' is being developed faster than expected, warning of a potentially 'uncontrollable' situation.
On the 7th (local time), at the joint press conference for the 2024 Nobel Prizes in Economic Sciences, Chemistry, and Physics held at the Royal Swedish Academy of Sciences in Stockholm, Sweden, Geoffrey Hinton, Nobel Prize laureate in Physics and Professor Emeritus at the University of Toronto, Canada, answers questions from the press. Photo by Yonhap News.
Geoffrey Hinton, a professor at the University of Toronto in Canada and co-recipient of this year's Nobel Prize in Physics, answered "Yes" when asked at the joint press conference for the Physics, Chemistry, and Economics laureates, held at the Royal Swedish Academy of Sciences, whether he believes superhuman AI can exist. He added, "This is something we have always believed would be realized."
Professor Hinton stated, "I used to think the development of super-intelligence would take much longer, but considering the recent pace of development, it seems it could happen within 5 to 20 years," and pointed out, "I think we need to seriously worry about how we can maintain control over AI."
Known as a pioneer of AI, Professor Hinton was also asked whether there was anything he regretted and would change if he could go back in time. He replied, "I wish we had considered safety earlier."
Demis Hassabis, CEO of Google DeepMind and co-recipient of the Nobel Prize in Chemistry, who was present at the press conference, expressed agreement with Professor Hinton's views.
CEO Hassabis explained, "Of course, my aspiration has always been to develop AI tools that contribute to scientific discovery," and added, "I believe AI will provide us with excellent tools to help address today's challenges faced by humanity, such as diseases, energy, and climate."
However, he also noted that he has been concerned about the risks associated with developing powerful general-purpose technologies, stating, "AI will be one of the most powerful technologies humanity has developed, so it is necessary to treat its risks very seriously."
They also shared their views on the need for AI regulation.
Professor Hinton pointed out that there are currently virtually no regulations on 'lethal autonomous weapons systems' (LAWS), one of the first areas where AI technology is being applied.
He attributed major governments' reluctance to regulate to the intensifying arms race among key weapons suppliers such as the United States, China, Russia, the United Kingdom, and Israel.
CEO Hassabis said, "AI regulation is necessary, but it is also very important to regulate properly," explaining, "Because the technology is evolving at such a rapid pace, regulatory approaches discussed just a few years ago may no longer be appropriate today."
He added, "That is currently the most difficult part."
Professor Hinton received this year's Nobel Prize in Physics along with John Hopfield, a professor at Princeton University in the United States, for laying the foundations of machine learning in AI. Professor Hopfield did not attend the press conference.
Hassabis, known in Korea as the 'father of AlphaGo' for creating the AI model 'AlphaGo,' was selected as a Nobel Prize laureate in Chemistry along with DeepMind researcher John Jumper and others for developing the AI model 'AlphaFold,' which predicts protein structures.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

