"Development Should Occur When Risk Can Be Managed"
Attention Turns to Whether the Musk-Altman Feud Played a Role
"We request that all artificial intelligence (AI) research labs immediately pause the development of AI systems that surpass GPT-4 for at least six months."
Elon Musk, CEO of Tesla; Steve Wozniak, co-founder of Apple; Yuval Harari, the world-renowned bestselling author and professor at Hebrew University; and over 1,100 other global figures signed an open letter released on the 28th (local time) by the U.S. nonprofit organization Future of Life Institute (FLI).
Prominent AI experts also added their names, including Stuart Russell, professor of computer science at the University of California, Berkeley (UC Berkeley) and a global authority on AI; Yoshua Bengio, professor at the University of Montreal and a pioneer of deep learning; and researchers from DeepMind, the AI company under Alphabet. Industry figures signed as well, among them Emad Mostaque, CEO of Stability AI, the developer of the image-generating AI 'Stable Diffusion.'
Sam Altman, CEO of OpenAI, the company behind the ChatGPT craze, was not among the signatories. The Wall Street Journal (WSJ) reported that Altman's name initially appeared on the list but was later removed, and that Altman himself said he had not signed the letter. Sundar Pichai, CEO of Alphabet, and Satya Nadella, CEO of Microsoft (MS), also did not sign.
Since OpenAI released ChatGPT at the end of last November, a wave of generative AI has swept the world. OpenAI recently unveiled GPT-4, and MS and Google are racing to integrate AI chatbots into their search engines. Against this backdrop, why are these executives and AI experts calling for a halt to AI development?
◆ "Development Should Proceed Only When Risks Are Manageable"
They pointed out that AI systems pose profound risks to society and humanity that must be managed, yet in recent months AI labs have been locked in an out-of-control race to develop ever more powerful systems. They urged that large-scale AI development be paused until shared safety protocols are developed, implemented, and audited by independent outside experts. The letter states, "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
They also argued that contemporary AI systems are becoming human-competitive, and that questions such as "Should we let machines flood our information channels with propaganda and untruth?" and "Should we automate away all the jobs?" must be asked. They warned that such systems could pose profound risks to society and civilization, and urged developers to cooperate with regulatory authorities.
The open letter appears to reflect concern that the recent surge of attention on AI is driving development forward without sufficient scrutiny. Most of the signatories have previously voiced concerns in public settings about the potential misuse of AI and the need for related regulation and policy.
As interest in AI grows, the UK government published a white paper on AI the same day, urging regulators to take context-specific approaches tailored to how AI is actually used. The paper estimated that AI could add £3.7 billion (approximately 6 trillion KRW) to the UK economy, but stressed that AI must comply with existing law, must not discriminate against individuals, and must not produce unfair commercial outcomes.
The European Union (EU) police agency Europol likewise warned the day before that advanced AI such as ChatGPT could be misused for online phishing, spreading disinformation, and other cybercrime, raising ethical and legal concerns.
◆ Musk, an Early OpenAI Investor, and His Troubled Relationship with Altman
The open letter, released amid the AI boom, has drawn attention not only for its content and list of signatories but also for the circumstances behind its release.
What kind of organization is FLI, which issued the open letter? According to its official website, its main mission is to inform the world about four major sources of risk: AI, biotechnology, nuclear weapons, and climate change. WSJ reported that the organization began drafting the letter last week. According to European Union (EU) transparency records cited by major foreign media, 86% of the organization's 2021 budget came from donations by the Musk Foundation.
Musk was an early investor in OpenAI. He attended the 2015 event in California where Altman first pitched the venture to investors, and the two stayed in contact thereafter. In 2018, however, Musk resigned from OpenAI's board, officially because of conflicts of interest with Tesla's AI research. Recent reports suggest that Musk had in fact attempted to take over OpenAI, arguing that its development was lagging behind Google's, and stepped down after Altman and other backers rebuffed him.
Musk has frequently spoken about the risks of AI. After OpenAI unveiled ChatGPT in November last year, he posted on Twitter, "ChatGPT is scary good. We are not far from dangerously strong AI," voicing concerns similar to those in the open letter. Last month, Musk also took aim at OpenAI, saying it "was created as an open-source, nonprofit company to serve as a counterweight to Google, but has now become a closed-source, maximum-profit company effectively controlled by Microsoft."
In December last year, Altman initially agreed with Musk's concerns about ChatGPT's risks. On the 25th, however, he said on a podcast that "Elon is obviously attacking us on Twitter" and expressed hope that Musk would acknowledge OpenAI's efforts to address AI's problems, according to U.S. business outlet Business Insider. On another podcast, Altman went so far as to call Musk a "jerk."
◆ "The Open Letter May Make Addressing AI Harms More Difficult"
After the open letter's release, scholars such as Emily Bender, a professor at the University of Washington, criticized it, arguing that rather than pointing to AI's actual harms, the letter mainly advertises how powerful AI is.
Princeton University professor Arvind Narayanan likewise criticized the letter, saying, "Ironically, this open letter fuels AI hype and makes it harder to address the real harms AI is already causing," adding that the true threats from AI are not mass unemployment but concrete financial and physical harm to individuals.
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.