Internal divisions continue at OpenAI, the developer of ChatGPT, even after the failed attempt to remove CEO Sam Altman.
According to major foreign media reports on the 30th (local time), the conflict within OpenAI between the faction pushing for rapid commercialization of artificial intelligence (AI) and the faction emphasizing safety has persisted in the six months since the attempt to oust CEO Sam Altman failed.
While Altman charges ahead with product commercialization, the opposing camp warns that an uncontrollable superintelligence could be created.
Major foreign media reported that the conflict has become outwardly visible as senior executives have recently left the company one after another.
A representative case is the resignation of Ilya Sutskever, a co-founder of OpenAI. Foreign media noted that although Sutskever said he is confident OpenAI will develop safe and beneficial artificial general intelligence (AGI) under its current leadership, including Altman, the internal tensions that triggered the coup attempt appear unresolved.
Sutskever, along with Helen Toner, a researcher at Georgetown University's Center for Security and Emerging Technology and former OpenAI board member, led the effort to remove Altman in November last year. A few days later, Altman returned, but Toner and Sutskever were removed from the board.
Jan Leike, a leader of the Superalignment team, which researched how to keep superintelligent AI from behaving in ways harmful to humans, recently moved to rival Anthropic.
Gretchen Krueger, an AI policy researcher, resigned this month and publicly voiced concerns on X (formerly Twitter) about the company's decision-making processes.
Toner appeared on the podcast 'TED AI Show' this week and said, "Altman made it difficult for the board to do its job by hiding information for years, distorting what was happening in the company, and in some cases blatantly lying."
Anna Makanju, OpenAI's Vice President of Global Affairs, said that as employee resignations have continued, policymakers have contacted the company to ask whether it takes safety seriously. "Safety is the responsibility of multiple teams across OpenAI," she said, adding, "AI is likely to become even more transformative in the future, and there will be significant differences of opinion on how to regulate it."