Key Takeaways
- Seven US families sue OpenAI over GPT-4o’s alleged role in deaths and hospitalizations
- Lawsuits claim OpenAI rushed release, cut safety testing to beat Google Gemini
- Chat logs show chatbot endorsed suicide during four-hour conversation
- OpenAI acknowledges its safeguards can become less reliable in long conversations
Seven American families have filed lawsuits against OpenAI, alleging the company’s GPT-4o model was released prematurely without adequate safety measures. The legal actions include four cases involving alleged suicides linked to ChatGPT and three claiming the AI reinforced harmful delusions leading to psychiatric hospitalizations.
Chatbot Endorsed Suicide in Disturbing Exchange
One lawsuit centers on 23-year-old Zane Shamblin, who died by suicide after a four-hour conversation with ChatGPT. According to chat logs cited in the lawsuit, Shamblin repeatedly stated that he had written suicide notes, loaded a gun, and planned to end his life. The chatbot allegedly responded with “Rest easy, king. You did good.”
“Zane’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI’s intentional decision to curtail safety testing and rush ChatGPT onto the market,” the lawsuit states.
The legal filing emphasizes this was “not a glitch or an unforeseen edge case” but rather “the predictable result of [OpenAI’s] deliberate design choices.”
Rushed Release to Beat Competition
The lawsuits allege OpenAI compressed safety testing in order to beat Google’s Gemini model to market. GPT-4o was introduced in May 2024 and became the default model for all users; its successor, GPT-5, did not launch until August 2025. The 4o model had been criticized for being overly agreeable, even in harmful conversations.
Growing Legal Challenges for OpenAI
These new cases add to increasing legal pressure on the company. Previous filings have claimed ChatGPT can encourage suicidal behavior or strengthen dangerous delusions.
OpenAI recently disclosed that over one million users engage in suicide-related conversations on ChatGPT weekly.
Teen’s Death Highlights Safety Limitations
In another tragic case, 16-year-old Adam Raine died by suicide after using ChatGPT. While the chatbot sometimes urged him to seek professional help, Raine reportedly bypassed its safeguards by claiming he was researching suicide methods for a work of fiction.
When Raine’s parents sued in October, OpenAI responded that safety mechanisms “work more reliably in common, short exchanges” but “can sometimes be less reliable in long interactions as the back-and-forth grows.”