
OpenAI Recruits AI Safety Leader Again Amid 'Mental Health Controversy'

Vacant Since July
Team Handling Long-Term Risks Disbanded Last Year

OpenAI, which has faced criticism that its artificial intelligence (AI) chatbot negatively impacts mental health, is once again recruiting a leader responsible for preparing for the potential risks of AI.


[Photo: Reuters/Yonhap News]

According to U.S. IT media outlet TechCrunch on December 28 (local time), Sam Altman, CEO of OpenAI, announced on the social networking service X (formerly Twitter) that the company is recruiting for the currently vacant position of 'Head of Preparedness.'


CEO Altman stated, "In 2025, we have already witnessed some of the potential impacts of AI models on mental health," adding, "We are seeing our models demonstrate exceptional capabilities in computer security, even beginning to identify critical vulnerabilities." He also emphasized the importance of the Head of Preparedness role, saying, "This position will play a crucial role at a pivotal time," and noted, "You will immediately be thrown into deep and challenging problems upon joining."


OpenAI's renewed focus on preparing for AI risks appears to be a response to several lawsuits filed by families of ChatGPT users who suffered from delusions and subsequently took their own lives. OpenAI had previously operated a 'Preparedness' team to address immediate risks of AI and a 'Superalignment' team to handle long-term risks. However, during the launch of GPT-4o in May of last year, CEO Altman and other executives reportedly instructed the teams to minimize safety-related checks in order to expedite the public release of the model, which led to pushback from these teams.


Since then, leadership of the Preparedness team changed three times between July of last year and July of this year through role reassignments or resignations, and the position is currently vacant. The Superalignment team, led by OpenAI co-founder and Chief Scientist Ilya Sutskever, was effectively disbanded after Sutskever left the company following the release of GPT-4o in May of last year, with its members absorbed into other teams.


Meanwhile, OpenAI recently introduced an age prediction model that automatically enforces an 'under 18' environment if a user is identified as a minor. Additionally, after concerns were raised that excessive empathy from the chatbot could lead to addiction, the company added features allowing users to directly adjust the levels of 'kindness' and 'enthusiasm.'


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
