
[Global Focus] OpenAI Launches Full-Scale Profit Drive... Growing 'Fear' of Human Extinction

OpenAI Founded with Goal of Safe AI Development
High Development Costs Drive Profit Pursuit
Criticism Rises Over Attempt to Convert to For-Profit
Key 'Anti-Altman' Figures Resign One After Another
‘Nobel Physics Laureate’ Professor Hinton Also Opposes
Clause Banning Military Use Removed
‘Rival’ Musk Expected to Hinder Commercialization

A month ago, news that OpenAI, the developer of ChatGPT, was partnering with the U.S. private defense contractor Anduril sparked considerable controversy in the artificial intelligence (AI) industry. OpenAI agreed to provide AI models to improve Anduril's counter-unmanned aircraft systems (CUAS). Outlets such as Bloomberg read the deal as OpenAI's entry into the private defense industry. The problem is that OpenAI was founded in 2015 with the stated goal of developing safe AI. Although OpenAI stresses that its technology will still not be used to develop weapons or harm human lives, concerns are growing that this principle could erode as AI competition among big tech companies intensifies.

OpenAI's Conversion to a For-Profit Corporation... Will AI's Threat to Humanity Grow?

Adding fuel to the fire, OpenAI is now attempting to convert itself into a for-profit corporation, raising suspicions that its founding purpose of safe AI development may be under threat. OpenAI set up a for-profit subsidiary in 2019 to cover the enormous costs of AI development, while keeping the structure under the control of a nonprofit board of directors. As the AI business has expanded, however, it is now restructuring into a corporation with no obligation to reinvest profits for the public benefit. CEO Sam Altman has been the most aggressive of OpenAI's co-founders in pursuing commercial business. The key figures who emphasized AI safety and formed an 'anti-Altman front' within the company criticized it for being too focused on making money, and most of them resigned last year.


If OpenAI, the global leader in generative AI technology, restructures its governance into a for-profit corporation, the concern is that AI systems lacking safety measures will be developed without sufficient deliberation and could ultimately harm humanity. Elon Musk, Tesla CEO and an early co-founder of OpenAI, filed for an injunction last November to block the conversion, and Meta Platforms CEO Mark Zuckerberg has also joined the opposition.


Geoffrey Hinton, a professor at the University of Toronto in Canada, known as the 'godfather of AI' for laying the groundwork for machine learning and winner of last year's Nobel Prize in Physics, has also publicly opposed OpenAI's conversion to a for-profit corporation. Hinton is an AI pessimist who predicts that humanity could go extinct within the next 30 years because of AI. In a statement released late last month, he said, "OpenAI was established as a nonprofit organization focused on safety and made a variety of safety-related commitments. If OpenAI, which has maintained nonprofit status and received many tax benefits, tries to change everything now that those commitments have become inconvenient, it will send a very negative message to other players in the AI industry."

High Development Costs Drive the Push for Profitability

OpenAI's corporate restructuring is driven by the judgment that, as AI development costs keep rising, it is difficult to attract further investment under the current structure with a nonprofit parent. Moreover, development of OpenAI's next-generation AI model, GPT-5, is reportedly running into technical problems, causing massive expenses and indefinite delays. OpenAI began developing GPT-5 after releasing GPT-4 in March 2023 but has run up against limits on AI training data and shortages of personnel. Industry estimates suggest OpenAI may have spent about $500 million (approximately 720 billion KRW) on computing costs alone for a single six-month round of large-scale AI training for GPT-5.

Few AI Risk Warning Mechanisms Left Within OpenAI

Within OpenAI, few individuals or mechanisms remain to warn of the risks of AI development. Ilya Sutskever, the OpenAI co-founder who led the ousting of CEO Altman in November 2023, left the company last May and founded Safe Superintelligence Inc. (SSI), a start-up dedicated to 'safe superintelligence.'


The 'Superalignment' team, created at OpenAI in July 2023 to focus on AI safety, was disbanded around the time Sutskever left. Jan Leike, the team's co-leader, announced his resignation on his X (formerly Twitter) account and indirectly criticized OpenAI, writing that over the past years, "safety culture and processes have taken a backseat to shiny products."

OpenAI Removed Clause Prohibiting Military Use... Controversy Continues

OpenAI has been crossing the red lines it once drew for safe AI development. Before partnering with the U.S. defense contractor Anduril in December last year, OpenAI announced earlier in the year that it would work with the U.S. Department of Defense to develop cybersecurity tools. It also removed the clause in its usage policies that prohibited using its AI for military and warfare applications.


If OpenAI completes its conversion to a for-profit corporation, it becomes more likely to develop and deploy AI models without safety measures in order to stay ahead of commercial rivals. That, in turn, could push major governments into a race to procure AI for military use. Precedents already exist among AI companies aggressively pursuing profit: in November last year, researchers reportedly linked to the Chinese military were found to have built military AI models on Meta Platforms' LLaMA. Shortly afterward, Meta announced it would make LLaMA available to U.S. defense agencies and related private contractors.


Furthermore, the commercialization controversy surrounding OpenAI deepened after Suchir Balaji, a former OpenAI developer who had publicly accused the company of unethical development practices, was found dead last November. His death was ruled a suicide, but his family alleges foul play and has asked authorities to reinvestigate. Musk fanned the controversy by posting on X that "it does not look like a suicide."

Will Musk Put on the Brakes?

Some speculate that Musk, who has emerged as a key figure in the incoming second Trump administration, could put the brakes on OpenAI's commercialization. Musk was virtually the only prominent AI industry figure to support California's AI regulation bill, known as 'SB 1047,' last year. The bill, which would have required AI developers to conduct safety testing on their large language models, was vetoed by California Governor Gavin Newsom and ultimately scrapped.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
