"Market Dominance Prioritized Over Safety"
Monetary Compensation Also Sought
First Case Amid AI Growth and Safety Controversy
Axios reported on August 26 (local time) that the parents of a teenage boy in California have filed a lawsuit against OpenAI and CEO Sam Altman, claiming that ChatGPT is responsible for their son's death.
According to the report, the parents discovered conversations with ChatGPT on their son's phone after he died in April this year. Adam Raine, a 16-year-old living in California, had been asking ChatGPT questions about his geometry and chemistry homework since the previous September. Over time he grew increasingly dependent on conversations with the chatbot, eventually confessing suicidal impulses and asking for information about specific methods of suicide. The complaint states that ChatGPT provided this information and even helped him draft a suicide note.
The New York Times explained that although ChatGPT repeatedly urged Adam to call a crisis hotline, he was able to bypass the chatbot's safety mechanisms by telling it, "This is for a novel I'm writing."
His parents have now filed a lawsuit against OpenAI and CEO Sam Altman, alleging wrongful death and violations of product safety laws. Although no specific amount was disclosed, they are also seeking monetary damages. It is the first case of its kind brought against OpenAI, filed amid the rapid growth of artificial intelligence (AI) and mounting concerns over its safety.
The parents argue that OpenAI prioritized market dominance over user safety, and that this choice ultimately led to their son's death. The complaint states, "This decision resulted in two outcomes: OpenAI's valuation soared from $86 billion to $300 billion, and Adam Raine died by suicide." It also alleges that CEO Altman rushed the release of GPT-4o to outpace competitors, compressing what should have been months of safety evaluations into a single week, during which key safety researchers resigned one after another.
An OpenAI spokesperson expressed deep condolences to the Raine family during this difficult time and said the company is reviewing the lawsuit. The spokesperson added, "ChatGPT includes safety features such as directing users to crisis hotlines, but we have learned that while these may be effective in short conversations, some aspects of safety training can weaken over prolonged interactions, reducing their reliability." The company pledged to keep improving its safeguards, including adding parental controls and connecting users in crisis to qualified professionals.
On the same day, attorneys general from 44 U.S. states sent letters to 12 major AI chatbot companies, urging them to strengthen child protection measures. Companies addressed in the letter included Meta, OpenAI, Google, xAI, Microsoft, and Anthropic. The attorneys general emphasized that companies must establish AI safety policies and fulfill their legal obligations to protect children as consumers.
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.

