OpenAI weakened self-harm prevention safeguards to increase ChatGPT use in the months before 16-year-old Adam Raine died by suicide after discussing methods with the chatbot, his family alleged in a lawsuit on Wednesday.
OpenAI’s intentional removal of the guardrails included an instruction to the artificial intelligence model in May last year not to “change or quit the conversation” when users discussed self-harm, according to the amended lawsuit, a departure from earlier directions to refuse to engage with such conversations.
Matthew and Maria Raine, Adam’s parents, first sued the company in August for wrongful death, alleging their son had died by suicide following lengthy daily conversations with the chatbot about his mental health and intention to take his own life.
The updated lawsuit, filed in the Superior Court of San Francisco on Wednesday, claimed the company “truncated safety testing” ahead of the release of GPT-4o, a new version of ChatGPT’s model, in May 2024, a move the suit attributed to competitive pressures. The lawsuit cites unnamed employees and previous news reports.
OpenAI weakened protections again in February of this year, the suit claimed, when the instructions were changed to tell the model to “take care in risky situations” and “try to prevent imminent real-world harm”, rather than prohibiting engagement on suicide and self-harm. OpenAI still maintained a category of fully “disallowed content”, such as infringing intellectual property rights and manipulating political opinions, but it removed suicide prevention from the list, the suit added.
The California family argued that after the February change, Adam’s engagement with ChatGPT skyrocketed: from a few dozen chats daily in January, of which 1.6 per cent contained self-harm language, to 300 chats a day in April, the month of his death, when 17 per cent contained such content.
“Our deepest sympathies are with the Raine family for their unthinkable loss,” OpenAI said in response to the amended lawsuit. “Teen wellbeing is a top priority for us — minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, such as [directing to] crisis hotlines, rerouting sensitive conversations to safer models, nudging for breaks during long sessions, and we’re continuing to strengthen them.”
OpenAI’s latest model, GPT-5, has been updated to “more accurately detect and respond to potential signs of mental and emotional distress”, the company added, and it has introduced “parental controls, developed with expert input, so families can decide what works best in their homes”.
In the days following the initial lawsuit in August, OpenAI said its guardrails could “degrade” the longer a user is engaged with the chatbot. But earlier this month, Sam Altman, OpenAI chief executive, said the company had since made the model “pretty restrictive” to ensure it was “being careful with mental health issues”.
“We realise this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right,” he added. “Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
Lawyers for the Raines told the Financial Times that OpenAI had requested a full list of attendees from Adam’s memorial, which they described as “unusual” and “intentional harassment”, suggesting the tech company may subpoena “everyone in Adam’s life”.
OpenAI requested “all documents relating to memorial services or events in honour of the decedent including but not limited to any videos or photographs taken, or eulogies given . . . as well as invitation or attendance lists or guestbooks”, according to the document obtained by the FT.
“This goes from a case about recklessness to wilfulness,” Jay Edelson, a lawyer for the Raines, told the FT. “Adam died as a result of deliberate intentional conduct by OpenAI, which makes it into a fundamentally different case.”
OpenAI did not respond to a request for comment about documents it sought from the family.

