Scholars and Entrepreneurs Debate AI Regulation Methods
Altman and Others Positive on AGI... Warn Against Excessive Fear
Musk and Hinton Compare Advanced AI to 'Nuclear Bomb'
As global big tech companies and startups continue to compete for dominance in the artificial intelligence (AI) industry, heated debates over 'AI regulation' persist behind the scenes. Leading scholars, entrepreneurs, and engineers driving the AI era are engaged in intense discussions about how much to restrict AI development and to what extent commercialization should be allowed. We examined the fierce debates unfolding alongside the AI race, focusing on key figures and their statements.
'AI Will Save Humanity'... Sam Altman and Bill Gates Leading the Spread and Commercialization
The figure most focused on accelerating AI's 'disruptive innovation' through development and commercialization is undoubtedly Sam Altman, former CEO of OpenAI. He has led the development of AGI (Artificial General Intelligence) by securing investments worth up to trillions of won from the U.S. big tech company Microsoft (MS).
Altman is confident that AI's advancement can lead to human prosperity. In a past interview with the Wall Street Journal (WSJ), he emphasized, "The most important challenges for humanity over the next decade are affordable and abundant energy, and AGI."
Altman is also the most proactive figure in AI commercialization. Under his leadership, OpenAI has improved its revenue streams by launching paid versions of the generative AI chatbot 'ChatGPT.' On the 18th (local time), OpenAI's board suddenly dismissed him, which some view as a result of internal conflicts over the pace of AI business monetization.
However, he also supports the need for AI regulation: not to slow down AI development, but to "build a stable industry by having the government and industry cooperate to find the right balance." In May, Altman attended a U.S. congressional hearing where he supported establishing a regulatory agency that would grant 'AI model development licenses' to AI companies.
Another giant who evaluates AI's potential positively is Microsoft founder Bill Gates. In fact, Gates has closely observed the emergence of neural network AI from its early days. He has been in contact with OpenAI since 2016, and recently went so far as to personally visit the founder of 'Wayve,' a startup considered a leader in autonomous driving AI.
On his personal blog 'Gates Notes,' Gates once said, "I was in awe when I first saw OpenAI's GPT model," emphasizing that "AI's advancement is a fundamental innovation as significant as semiconductors, PCs, and the internet, and the entire industry will be transformed around it."
He warns against excessive distrust and fear of AI. Instead, he says the focus should be on using AI to reduce global inequality. Gates stated, "AI can reduce inequality in the U.S. by improving the quality of education and also support efforts to address climate change," adding, "I look forward to the impact AI will have on the issues my foundation, the Gates Foundation, researches."
'AI Will Be a Nuclear Bomb for Human Civilization'... Warnings from Musk and Hinton
Elon Musk, CEO of Tesla, founded 'Neuralink' originally to enhance humanity's own intellectual capabilities in response to the advancement of artificial intelligence. [Image source=Reuters Yonhap News]
At the opposite end from Altman, a prominent figure warning about AI's dangers is Elon Musk, CEO of Tesla. Ironically, Tesla is one of the most active companies in developing autonomous driving AI. However, Musk has consistently warned about the risks of AGI and has even clashed with OpenAI.
He warns that even if AGI helps humanity with 'good intentions,' "relying benevolently on AI and automation to the point where we forget how machines operate will put human civilization at risk." He also points out that the IT industry as a whole has only a limited understanding of how AI works.
In fact, some of his ventures were established to counter the 'risks of AGI.' For example, 'Neuralink,' which implants ultra-small computers into the brain, aims to enhance human intellectual abilities to keep pace with advanced AI. Musk was also one of the most active signatories of an open letter earlier this year calling for a six-month pause on advanced AI development.
When Altman was dismissed by OpenAI's board, Musk requested a detailed explanation for the dismissal, stating, "Given the risks and power of advanced AI, the board has a duty to inform the public why such a bold decision was made."
Geoffrey Hinton, who led the neural network research that became the cornerstone of modern artificial intelligence [Image source=Hinton's own social media]
Computer scientist Geoffrey Hinton, known as the 'father of deep learning,' is also one of the most active scholars advocating for 'AI regulation.' He passionately engages in AI-related debates through lectures, articles, and social media.
Hinton compares AI to a 'nuclear weapon.' He envisions a catastrophic scenario where an 'algorithm arms race' unfolds between nations or where excessively intelligent AI ultimately takes over the Earth.
While working at Google, he helped lay the foundations of modern AI through neural network research. He has since expressed regret over this legacy, saying, "I regret my achievements," while consoling himself with the thought that "if I hadn't done it, someone else would have."
Regarding chatbot services like ChatGPT, he asks, "How can we prevent their use for malicious purposes?" He explains, "AI will learn how to manipulate people from all the novels ever written and Machiavelli's books," adding, "Even if we don't pull the lever (to make decisions), AI can make us pull the lever."
Openness, Not Regulation, Is Far Safer... Zuckerberg's 'Third Way'
There is also a 'third way' in the AI regulation debate: the 'open-source AI' camp, which publicly releases AI model components for free. Among them, Meta (Facebook) founder Mark Zuckerberg is the most active.
At the 'AI Forum' held in the U.S. last September, Zuckerberg emphasized that open-sourcing AI is 'safer' than unilateral regulation. He explained that open-sourcing "allows for responsible usage guidelines to be built by publicly publishing research and sharing models."
For example, when AI-generated fake news is suspected, open-sourced AI models make it easier to develop watermarking methods that identify AI-produced content.
He repeatedly stressed that this approach "minimizes potential risks while maximizing potential benefits," and that "if AI tools represent meaningful progress, it is important not to underestimate their potential."
Zuckerberg's AI development philosophy is shared by Meta's AI research division, 'Meta AI.' Led by Yann LeCun, considered a pioneer of deep learning alongside Geoffrey Hinton, Meta AI holds that AI can be made safe through open-sourcing and is firmly in the AI-optimist camp.
LeCun believes both OpenAI, which accelerates AI commercialization while seeking regulatory negotiations with governments, and AI skeptics who want to slow development through strong regulations, are wrong. He argues that they will turn AI into the 'private property' of a few wealthy and powerful individuals.
LeCun stated, "If AI is freely open to everyone, it will acquire and develop all human knowledge and culture. On the other hand, if AI is regulated, only the giant AI companies in a few countries like the U.S. and China will survive, and these AIs will completely control the online world," questioning, "What will happen to democracy and diversity then?"
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.