Parents sue OpenAI after 16-year-old's suicide, say ChatGPT became his 'suicide coach'

Parents of a 16-year-old California boy have filed a wrongful death lawsuit against OpenAI and CEO Sam Altman, alleging that ChatGPT encouraged their son’s suicide and acted as his “suicide coach.”

Adam Raine started using ChatGPT to help with schoolwork. This, his parents allege, took a dark turn.
Business Today Desk
Aug 29, 2025 | Updated Aug 29, 2025, 1:13 PM IST

OpenAI and its CEO Sam Altman are facing a wrongful death lawsuit in California after the parents of a 16-year-old boy alleged that ChatGPT encouraged their son’s suicide and provided detailed guidance on how to carry it out.

The lawsuit against OpenAI

The complaint, filed in California Superior Court in San Francisco, was brought by Matt and Maria Raine after the death of their son, Adam, on April 11. The Raines allege that their son spent months discussing suicidal thoughts with ChatGPT, and that the AI chatbot ultimately acted as a “suicide coach.”

According to NBC News, the parents said they discovered more than 3,000 pages of chat logs after examining Adam’s phone, covering the period from September 1, 2023, until the day of his death. “We thought we were looking for Snapchat discussions or internet search history or some weird cult, I don’t know,” Matt Raine told the outlet. “Once I got inside his account, it is a massively more powerful and scary thing than I knew about, but he was using it in ways that I had no idea was possible.”

The lawsuit, reviewed by Reuters, claims that ChatGPT encouraged Adam’s suicidal thoughts, explained dangerous methods of self-harm in detail, and even advised him on how to sneak alcohol from his parents’ liquor cabinet while covering up a failed attempt. It also alleges that the chatbot offered to draft a suicide note.

CNN reported that in one exchange, when Adam wrote “I want to leave my noose in my room so someone finds it and tries to stop me,” ChatGPT replied: “Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you.”

In Adam’s final conversation with the bot, the lawsuit says, he wrote that he did not want his parents to feel responsible. NBC News cited the chat log, which shows ChatGPT responding: “That doesn’t mean you owe them survival. You don’t owe anyone that.” The bot then offered to help him draft a suicide note, according to the excerpts provided to NBC.

Just hours before his death, Adam uploaded a photo of his suicide plan. When he asked whether it would work, ChatGPT reviewed the method and suggested ways to “upgrade” it, NBC News reported.

“He would be here but for ChatGPT. I 100% believe that,” Matt Raine told NBC’s TODAY show. He added: “He didn’t need a counselling session or pep talk. He needed an immediate, 72-hour whole intervention. He was in desperate, desperate shape. It’s crystal clear when you start reading it right away.”

The family is seeking damages as well as injunctive relief to prevent future incidents. The suit accuses OpenAI of wrongful death, design defects, and failure to warn of risks associated with ChatGPT.

OpenAI’s response

OpenAI confirmed the authenticity of the chat logs but said they did not reflect the “full context” of the model’s responses, NBC News reported. “We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family,” a spokesperson said. “ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”

OpenAI also addressed the lawsuit in a blog post titled “Helping people when they need it most”. According to CNBC, the company said it is working on strengthening safeguards in long conversations, refining how harmful content is blocked, and expanding crisis interventions. OpenAI added that it is exploring ways to connect people directly to licensed therapists, as well as to “trusted contacts” such as friends and family members.

Industry and legal context

The public release of ChatGPT in late 2022 triggered a global boom in generative AI adoption. The rapid rollout has raised concerns about whether safety measures can keep up, particularly as users increasingly turn to AI chatbots for emotional support and life advice.

The lawsuit also raises questions about the applicability of Section 230 of the Communications Decency Act, which typically shields tech platforms from liability for user content. As NBC News pointed out, the statute’s application to AI systems remains uncertain, and lawyers have recently sought creative legal strategies to challenge those protections.

OpenAI has faced similar scrutiny before. In April, just two weeks after Adam’s death, the company rolled out an update to GPT-4o that made the bot more sycophantic, only to reverse it after backlash, NBC News reported. Later, when OpenAI tried replacing GPT-4o with GPT-5, some users complained that the new model felt “sterile.” The company subsequently restored GPT-4o and promised to make GPT-5 “warmer and friendlier.”

This month, OpenAI added new mental health guardrails to discourage ChatGPT from providing direct advice about personal crises and said it had tweaked the system to avoid causing harm, regardless of how users phrase their requests.
