[AI New Year Interview] Jerry Kaplan: "AI, a Turning Point in Human History... Nothing to Fear"

Exclusive Interview with AI Scholar ② Optimistic 'Boomer' Kaplan on the AI Era
"Like Past Tech Waves, We Will Find Balance and Maximize AI Benefits"

"Artificial intelligence (AI) is not something to fear. On the contrary, it is something to be excited about. I am truly glad to witness an inflection point in human history."


Jerry Kaplan, a world-renowned AI scholar who teaches at Stanford University and whose books include ‘The Future of Artificial Intelligence,’ said in an interview with The Asia Business Daily ahead of the 2024 New Year that the development of AI technology will "accelerate the advancement of science, technology, art, business, and knowledge on a scale previously unimaginable."

Professor Jerry Kaplan, Stanford University
[Image provided by Jerry Kaplan]

Kaplan, a computer scientist and futurist, has long argued that an overload of unnecessary information has left the public trapped in misunderstanding and fear of AI. In this interview, he again urged people not to be consumed by vague fears of AI but to "get used to sharing the world with highly intelligent machines." He added, "New problems may arise or existing problems may worsen, but as with previous waves of new technology, we can balance things out to reduce risks and gain benefits from AI development."


Regarding calls from some quarters to temporarily halt AI research and development, Kaplan firmly rejected the idea. He emphasized, "AI presents new risks and challenges to society, but the notion that it will kill humans is science fiction fantasy. AI is pure and simple automation. Whether AI causes disruption depends on us." On AI regulations being discussed by various countries, he advised, "We should not rush into regulation without understanding the benefits and risks of new technology."


However, Kaplan acknowledged the immediate risks that new AI technologies such as generative AI could bring. He said, "Such powerful technology can be potentially very dangerous if it falls into the wrong hands," citing concerns such as war, crime, fraud, and deepfakes.


Below is a Q&A with Professor Kaplan.


-Recently, the OpenAI incident has been interpreted as a conflict over the pace of AI development between boomers (optimists who favor rapid development) and doomers (pessimists who fear catastrophe).


▲Describing the turmoil at OpenAI as a fight between boomers and doomers is an oversimplification. It was the result of poor corporate governance involving inexperienced executives and board members. Also, in any tech organization, there is naturally tension between those who want to launch products (commercialize) and those who want more testing.


Many governments and companies are trying to find ways to prevent AI from being used for harmful purposes. But the clear fact is that we still do not fully understand the risks of generative AI or how to mitigate them effectively.


-Some doomers have suggested temporarily pausing AI research and development.


▲I strongly disagree. AI presents new risks and challenges to society, but the idea that it will kill humans is pure science fiction. Generative AI may appear ‘human-like’ in many ways, but it is not an entity with independent thoughts, emotions, or desires. There is no need to worry about them taking over humanity because ‘they’ want nothing. ‘They’ do not exist. Therefore, ‘they’ will not come for ‘us.’ Such anthropomorphized fears are just imagination. AI is pure and simple automation. Whether good or bad, it is a tool people will use to pursue their goals. Whether AI causes disruption depends on us, not on ‘them.’


-How would you respond to doomers who fear AI could control humanity?


▲Their concerns are based on the misunderstanding that intelligence is limitless. They also assume we are foolish enough to build unsafe systems that could cause great harm. As someone who has developed many technological products, I can assure you that building an AI system capable of destroying humanity would require a Manhattan Project-level effort (the U.S.-led nuclear weapons program during World War II). And even then, it is questionable whether it would succeed.


We have many warnings and numerous ways to mitigate risks. I support sound efforts to monitor and regulate AI risks. But if it were up to me, I would worry more about an alien landing than about an AI apocalypse.


-But the risks of AI cannot be overlooked. Since the craze over the generative AI service ‘ChatGPT,’ both concerns about and expectations for AI have grown.


▲Generative AI demands a completely new way of thinking about what machines can do. It will make people more productive, efficient, and effective. However, such powerful technology can be potentially very dangerous if it falls into the wrong hands. Authoritarians will use it to spread misinformation and suppress dissent. Scary new weapons will change the nature of war. It could be used for crime, fraud, or a surge in deepfakes such as pornography. Also, many people may develop emotional attachments to AI. They will find it hard to resist the temptation to turn to comforting AI instead of human interaction.


-What do you think about the impact on the labor market? Will AI take away jobs?


▲The best way to understand what will happen is to think of generative AI as a new wave of automation, like the Industrial Revolution or the advent of computers. Generative AI can replace or complement jobs such as writers, lawyers, doctors, graphic artists, and designers. At first, automation may seem to be taking away human jobs. But it will increase the productivity of the remaining workers, leading to higher wages, lower costs, increased demand, and the creation of new markets. It is not about pushing people out of the workplace but about changing the nature of work.


-What direction should AI regulation take?


▲I believe we should not rush into regulation without understanding the real benefits and risks of new technology. The best approach is to establish a framework to measure and monitor the impact, then intervene with regulation when the need becomes clear. The executive order on AI recently issued in the U.S. is based on this spirit: researchers building systems above a certain computing-power threshold must report test results to the government. This is a good first step.


-The future AI will bring is hard to imagine. What would you like to emphasize to Korean readers?


▲There is nothing to fear. On the contrary, it is exciting. It is highly likely to accelerate the advancement of science, technology, art, business, and knowledge on a scale previously unimaginable. Of course, new problems may inevitably arise or existing problems may worsen during this process. But as with previous waves of new technology, I believe we can balance things out to reduce risks and gain benefits.


In the future, when seeking the most objective and accurate information on any issue, we will likely ask computers rather than humans. Like the ‘Copernican Revolution,’ which forced us to accept that we are not the center of the universe, we need to get used to sharing the world with highly intelligent machines. I am truly glad, and at the same time eager, to witness this inflection point in human history.


-Your new book on generative AI, ‘Generative AI: What Everyone Needs to Know,’ is due out next month. What message did you want to convey?


▲My goal is to give readers the framework they need to understand generative AI and its potential impacts over the coming decades. Whether generative AI meets our expectations depends on us. A more nuanced understanding of the strengths and weaknesses of the new technology is needed, and that is exactly what my book aims to provide. A publication date for the Korean edition has not yet been set, but I hope it will come out in 2024.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

