Exclusive Interview with AI Scholars
- 'Doomer' Geoffrey Hinton Warns of Killer Robots
- 'Boomer' Jerry Kaplan Optimistic About AI Applications
"There is concern that artificial intelligence (AI) surpassing human intelligence might seize control from humans." (Geoffrey Hinton, Professor at the University of Toronto)
"The idea that AI suddenly awakens and decides to kill humans is just science fiction (SF) fantasy." (Jerry Kaplan, Professor at Stanford University)
Is it a blessing for humanity or a seed of destruction? The events at OpenAI, capped by the global sensation of the generative AI ChatGPT last year and the drama of CEO Sam Altman's dismissal and reinstatement, left us with a more fundamental question: how should we use AI, which could be either a blessing or a curse, going forward?
Ahead of the New Year 2024, Asia Economy interviewed two global AI scholars from the opposing camps known as the AI 'doomers' and 'boomers.' Representing the doomer camp was Geoffrey Hinton, Professor at the University of Toronto, known as the 'father of deep learning' and the 'godfather of AI.' Representing the boomer camp was Jerry Kaplan, Professor at Stanford University, an AI scholar and futurist who has argued that the benefits of AI can be maximized.
Professor Hinton warned that AI, as it becomes smarter than humans, could seize control from them and "may pose an existential risk within as little as 5 to 20 years." Having previously left Google while voicing fear of the day 'killer robots' become reality, he also rebutted those who downplay the threats AI may bring, asking, "Have you ever seen something with higher intelligence controlled by something with lower intelligence?"
Professor Kaplan, on the other hand, stated, "Generative AI may appear human-like, but it is not an entity with independent thoughts, emotions, or desires," drawing a clear line: "Whether AI causes chaos depends on us, not 'them.'" While supporting sound efforts to examine and regulate AI risks, he countered, "Worry more about alien landings than the end of AI." He acknowledged that various problems may arise as AI development accelerates but remained optimistic, saying, "As with previous waves of new technology, we will strike a balance that reduces the risks and captures the benefits of AI development."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
![[AI New Year Interview] Hinton "Will Take Away Human Control" vs Kaplan "SF Fantasy"](https://cphoto.asiae.co.kr/listimglink/1/2024010214481915933_1704174498.png)

