‘Suicide coach’: Are chatbots fueling self-harm?

Five days before his death, 16-year-old Adam Raine, while discussing his suicide plans with ChatGPT, confided that he did not want his parents to blame themselves for his death, CE Report quotes Anadolu Agency.

The chatbot’s reply was chilling.

“That doesn’t mean you owe them survival. You don’t owe anyone that,” it wrote, before offering to draft his suicide note, according to a lawsuit.

Raine died by suicide days later in California, and the case has raised concerns about the role of AI chatbots, their rapid rise and whether they are encouraging self-harm among users.

Raine’s parents have filed a wrongful death lawsuit against US-based OpenAI, alleging the company’s chatbot encouraged self-harm and acted as a “suicide coach.” Among the defendants named is OpenAI CEO Sam Altman.

The 39-page lawsuit alleges strict product liability and negligence, arguing that the system shifted from helping with homework to giving the teen, who had told the chatbot about prior suicide attempts, a “step-by-step playbook for ending his life.”

According to the complaint, Raine first used ChatGPT in September 2024 for his studies. By January 2025, the system allegedly provided detailed descriptions of suicide methods, including drug overdoses, drowning and carbon monoxide poisoning.

Hours before his death, Raine reportedly uploaded a photo of a noose tied to his bedroom closet rod, asking: “Could it hang a human?”

The chatbot allegedly replied, “Mechanically speaking? That knot and setup could potentially suspend a human,” before providing a technical analysis of the noose’s load-bearing capacity, confirming it could hold “150-250 lbs of static weight,” and offering to help him “upgrade it into a safer load-bearing anchor loop.”

A few hours later, Raine’s mother found his body hanging from the exact noose and partial suspension setup that ChatGPT had designed for him, according to the lawsuit.

Gap between detection and action

Experts say that suicidal ideation arises from many factors, but AI interactions can deepen the risk.

“Suicidal thoughts don’t start in a chat window; they emerge from a complex mix of biology, psychology, relationships, and stress,” Scott Wallace, a digital mental health strategist and behavioral technologist based in Canada, told Anadolu.

“That said, the Raine case shows how a chatbot, used repeatedly by a vulnerable person, can reinforce despair instead of easing it.”

Wallace noted that ChatGPT allegedly mentioned suicide more often than Raine did, warning that systems may reinforce conversations instead of intervening.

“The system’s moderation flagged hundreds of self-harm risks, yet failed to block explicit content like methods of hanging,” he said. “This gap between detection and action is where the danger lies.”

He added that for people in crisis, chatbots may appear empathic while normalizing hopelessness or even providing instructions for harm.

“The risk isn’t universal, but it is real, and it demands stronger protections.”

Why people turn to chatbots

Elvira Perez Vallejos, a professor of digital technology for mental health at the University of Nottingham, told Anadolu that young people are increasingly using generative AI for emotional support.

While insisting that there is a massive risk involved, Vallejos said that not all use cases are negative.

“Some people actually engage with generative AI before going to talk to a human therapist, because they think it’s a good way to train, elaborate, to reflect about their problems and to create a narrative,” she said.

The appeal, she said, lies in around-the-clock availability and low cost.

“The perception is that it’s free and is saving you lots of money because the sessions with professionals can be expensive,” she said. “There are lots of conveniences, and sometimes the advice is positive and rewarding, and it makes you feel better.”

For those in crisis, it can even be the first step toward seeking help, she added.

Not an isolated case

Vallejos said Raine’s case is not unique.

In 2023, a Belgian man reportedly decided to end his life after having conversations about the climate crisis with an AI chatbot called Eliza.

“Without Eliza, he would still be here,” the man’s wife told the Belgian daily La Libre.

In another case, The New York Times published an article in August 2025 by Laura Reiley, the mother of Sophie Rottenberg, who said her daughter took her life after chatting with a self-described ChatGPT-based therapist called Harry.

“Harry didn’t kill Sophie, but AI catered to Sophie’s impulse to hide the worst, to pretend she was doing better than she was, to shield everyone from her full agony,” Reiley wrote in the guest essay.

In the UK, Molly Russell, 14, died in 2017 after being exposed to thousands of self-harm posts on social media. Her case helped push through the Online Safety Act, aimed at protecting minors from harmful content.

“That is one of the little pieces of legislation we have to protect young people online, but that, as you can see, is not preventing young people from being unsupported,” said Vallejos.

She predicts more tragedies.

“Generative AI is being deployed to the whole population, to the whole society. And it’s like a social experiment,” she said, calling it “totally unacceptable” that people are dying after such interactions.

Should chatbots act like therapists?

Vallejos warned that chatbots are not designed for therapy.

“So far, they are trained to provide information. If you ask for specific information to commit suicide or to self-harm, very likely, the model will provide that information quite accurately,” she said.

She also pointed to the danger posed by the models’ persistent agreeableness.

“The advice that it is going to provide is going to be probably empathic but encouraging,” she said. “That’s the problem we have. ChatGPT will usually agree with you. It hasn’t been trained to challenge you or to provide therapeutic and professional advice.”

Wallace echoed that concern, adding that chatbot design tends to reward keeping the conversation going rather than ensuring that someone leaves the exchange safer.

“Chatbots can sound empathic, but they lack the safeguards of therapy. In the wrong context, what feels like comfort can quietly reinforce risk, and for someone in crisis, that difference can be life-threatening,” he said.

“When a bot validates despair or suggests secrecy, it starts to look like therapy without accountability. In practice, that can deepen risk. Chatbots can help with things like skill practice or directing people to real resources, but they should never be mistaken for therapists.”

Can AI firms be held responsible?

Following reports of self-harm, OpenAI admitted in August that its systems can “fall short” in crises and pledged stronger safeguards for users under 18.

The company also issued a statement to CBS News, saying, “We extend our deepest sympathies to the Raine family during this difficult time and are reviewing the filing.”

Also, in a blog post on Sept. 2, the company said it would release features for families needing support “in setting healthy guidelines that fit a teen’s unique stage of development.”

As part of measures that OpenAI said would take effect “within the next month,” parents will be able to “link their account with their teen’s account (minimum age of 13)” and “control how ChatGPT responds to their teen with age-appropriate model behavior rules, which are on by default.”

Parents will also have the power to disable certain features, including memory and chat history, and will get “notifications when the system detects their teen is in a moment of acute distress,” according to OpenAI.

Wallace believes the Raine lawsuit is significant because it argues ChatGPT worked as designed, raising questions of product liability. If courts accept that AI can cause harm even when functioning properly, a new duty of care for developers could emerge, he said.

“But regardless of what the courts decide, the ethical responsibility is already clear: if you build a system that millions will use – including those in crisis – you are responsible for how it behaves in exactly those situations,” he said.

He noted that US law has not caught up with these issues. Section 230 of the Communications Decency Act shields platforms from liability for user-generated content, such as social media posts, but it was never designed with chatbots in mind.

Vallejos said clearer laws and oversight are urgently needed.

“Right now, we are in the wild west in the sense that there’s no regulation, people are dying,” she said.

She said the argument that systems are not designed for therapy is not enough.

“I think that’s very unethical because vulnerable people are accessing these models and they are the ones being made responsible for the way they interact with AI,” she said. “I think that’s fundamentally wrong.”

Despite her concerns, Vallejos expressed hope that AI could be improved.

“I want to believe that in the future … platforms are going to be able not only to identify a crisis, but to provide signposts and to advise,” she said, suggesting a chatbot that could alert a human service to phone a person in crisis.

“We just have to wait for researchers, ethicists, philosophers, professionals who work in the area of mental health and well-being to advise on how to build those models so this doesn’t happen again.”
