
"Is the Dream Technology AGI Already Here?"... IT Industry on Alert Over OpenAI Incident

'Speed Race' in AI Industry Raises 'Safety' Issues
Competition Expected to Intensify After Situation Calms Down

The dismissal saga of Sam Altman, CEO of OpenAI and known as the 'father of ChatGPT,' is unfolding with twists and turns, keeping the IT industry on high alert. The internal conflict within OpenAI over the pace of artificial intelligence (AI) development has caused an upheaval significant enough to reshape the industrial landscape, forcing the question of AI safety onto companies that had previously focused solely on speed.


The conflict at OpenAI arose from a clash between the accelerationist camp, led by former CEO Altman, which pushed AI commercialization forward, and the cautious camp, which called for slowing down. The IT industry is paying close attention to the fact that the 'superstar' company that sparked the generative AI boom is now in crisis over differing views on AI. This suggests that OpenAI's technology may have advanced close enough to Artificial General Intelligence (AGI) to provoke such philosophical clashes.


"Is the Dream Technology AGI Already Here?"... IT Industry on Alert Over OpenAI Incident [Image source=Yonhap News]

AGI refers to AI capable of general-purpose thinking like a human. It can learn and train itself without human instruction, which is why it is called a dream technology. It is an evolution beyond current AI such as ChatGPT, which understands natural language, and DALL-E, which generates images, both of which perform only specific tasks. 'Skynet' from the film Terminator, which brought about the apocalypse, represents the worst imaginable AGI; by contrast, the robot David from Steven Spielberg's film A.I. is a benevolent one.


Whether OpenAI has actually reached AGI remains a matter of speculation, but former CEO Altman hinted at the possibility at the company's first developer conference, held on the 6th, saying that "the GPT-5 under development could become AGI." Ilya Sutskever, OpenAI's chief scientist, who led Altman's ousting, also remarked on the social networking service X that "GPT seems to have a bit of self-awareness."


An AI startup official said, "It is surprising in itself that AGI has become the cause of a management dispute at the world's top AI company," adding that "AGI is no longer a distant future but an immediate issue in corporate decision-making." Another developer said, "They may have witnessed a certain level of AGI internally, or judged that the current pace of development is too fast," noting, "If such achievements have been made, competing companies cannot just sit back and watch."


On the other hand, some analysts expect this incident to put the brakes on an AI industry that had been racing toward commercialization, with deliberation over AI risks and ethics deepening alongside business development. Professor Kim Myung-joo of the Department of Information Security at Seoul Women's University said, "This will be an opportunity to consider how serious AI safety issues are," adding, "Because the risks are proportional to its outstanding capabilities, the pace of global regulation is also expected to accelerate."


This could also be a chance for companies emphasizing safe AI to gain attention. Professor Lee Sang-gu of the Department of Computer Science at Seoul National University said, "In AI communities, including Silicon Valley in the U.S., a trend has emerged that values ethics over technological competition," predicting, "Seeing that one of the places OpenAI personnel are moving to is Anthropic, this trend will strengthen." Anthropic, founded by the siblings Daniela Amodei and Dario Amodei, both formerly of OpenAI, emphasizes ethics and safety in AI development.


Conversely, there is also a forecast that competition will intensify once the situation settles. If former CEO Altman confirms his move to Microsoft (MS), MS could secure an even stronger position. Reports have already surfaced that OpenAI employees have announced collective resignations, triggering recruitment battles among major companies like Google. Ha Jung-woo, head of the AI Innovation Center at Naver Cloud, said, "If OpenAI personnel move to MS or Google, the pace of commercialization will accelerate," and predicted, "Once the situation is resolved, competition could accelerate centered around MS."


"Is the Dream Technology AGI Already Here?"... IT Industry on Alert Over OpenAI Incident

What is AGI: AI can be broadly divided into three stages: ▲weak AI, which performs specific roles under specific conditions, ▲strong AI, which is applicable to all situations, and ▲superintelligent AI, which far surpasses human intelligence. Of these, strong AI is what is called AGI. It possesses intelligence comparable to that of humans, a stage at which it has a kind of self-awareness and can analyze and judge situations on its own.

Opinions vary on whether AGI can actually be realized and, if so, by what means, and experts also differ on the impact AGI would have on humanity. Former CEO Altman viewed AGI as a technology that could benefit all humanity and pushed for rapid development; supporters believe AGI is the most powerful tool for solving humanity's problems, such as climate change and disease. Pessimism about AGI also exists. Chief Scientist Sutskever has expressed concern that AI's capabilities have advanced to the point where it could manipulate public opinion or control weapons, and has offered a bleak outlook in which AI might enable dictatorship or people might choose to become part of AI.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
