[Reporter’s Notebook] What the Nobel-Winning 'Godfather of AI' Emphasized in His Interview


The hot topic of this year’s Nobel Prizes is undoubtedly artificial intelligence (AI). After the Nobel Prize in Physics went to the ‘godfather of AI,’ the Chemistry Prize was likewise awarded for AI-driven research. It is a clear signal that AI has become an unstoppable trend.


Yet there is something worth paying attention to here. Geoffrey Hinton, a professor at the University of Toronto and this year’s Nobel laureate in Physics, is the scholar who first devised the concept of deep learning, the foundation of AI training. At the same time, he has warned that AI could one day become ‘killer robots’ that control human society.


In December last year, when the debate around AI between ‘doomers’ (those who foresee destruction) and ‘boomers’ (those who champion development) was at its peak, Asia Economy interviewed Professor Hinton over a total of eight email exchanges. Asked when AI might pose an ‘existential threat’ to humanity, he answered, “As soon as five years, or at the latest within 20 years.” He asserted that there is a 50% chance that AI’s reasoning ability will surpass humans within that period, and a 50% chance that AI will then wrest control away from humans. Given the rapidly accelerating pace of AI development, the future Hinton predicted may arrive even sooner than expected.


What Professor Hinton most wanted from the interview with our publication was to put the question of AI before everyone at a time when even the direction of the discussion was unclear. His answers were sometimes brief, but each line carried a depth of thought that was not easy to render into Korean. It was clear that he chose even individual words with great care.


While warning about the dangers of AI, Hinton did not see regulation as the answer. He was skeptical of its effectiveness, saying AI is even harder to regulate than ‘nuclear weapons,’ and he believed the development pressure from major countries and corporations could never be stopped. In the short term, he worried about election manipulation through fake AI-generated images, and he agreed with Hebrew University professor Yuval Harari’s warning that AI could trigger a global financial catastrophe. In a data-driven financial market, if AI gains control and creates new financial instruments, the risks would be almost impossible to predict.


So, what should the world do now? Hinton’s answer was a single line: “It is time to work on finding ways to ensure (AI) does not want to control humanity.” When asked whether this meant that, for now, the best we can do is stay alert to AI’s dangers and keep the discussion going from our respective positions, he replied with a single word: “Yes.”


Professor Hinton, now a Nobel laureate, has also been described as a ‘whistleblower.’ His regretful remark, “Even if I hadn’t researched it, someone else would have,” carried particular weight, coming at a time when the world was captivated by the dazzling technological advances and conveniences brought by ChatGPT.


How should we use AI, which could be either a blessing or a curse? And what should we prepare for now? This concern should no longer be the sole responsibility of national leaders and AI developers.


Asia Economy, January 2nd newspaper edition


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

