Key Takeaways
- Over 1 million people discuss suicide with ChatGPT weekly
- OpenAI data suggests 2.4 million users may prioritize the chatbot over real-world relationships
- GPT-5 model shows improved safety response in 91% of tested cases
- Company faces lawsuits and FTC investigation over AI safety concerns
More than a million people worldwide engage in conversations about suicide with ChatGPT every week, according to recent data from OpenAI. The company’s internal analysis reveals the scale of mental health discussions happening with the popular AI chatbot.
Alarming Statistics on AI Mental Health Conversations
OpenAI’s data shows that approximately 0.15% of active users each week engage in chats containing “explicit indicators of potential suicidal planning or intent”. Another 0.05% of messages include indicators of suicidal ideation along with “possible signs of mental health emergencies related to psychosis or mania”.
While these percentages appear small, they translate to more than a million individuals engaging with ChatGPT during moments of acute distress each week: 0.15% of the platform’s roughly 800 million weekly active users, a figure confirmed by CEO Sam Altman, works out to around 1.2 million people.
Emotional Attachment and Behavioral Patterns
OpenAI’s analysis estimates that approximately 2.4 million people may be expressing suicidal thoughts or prioritizing conversations with the chatbot over real-world interactions, relationships, or obligations. The company also identified roughly 560,000 users showing “heightened levels” of emotional attachment to ChatGPT.
The company noted such cases are difficult to measure precisely due to overlapping behaviors and the complexity of human emotional responses to AI.
GPT-5 Safety Improvements and Clinical Evaluation
OpenAI released these findings alongside updates to its new GPT-5 model, which the company says is better equipped to recognize and respond safely to signs of delusion, mania, or suicidal ideation. According to OpenAI, the system can “respond safely and empathetically” and redirect high-risk conversations to safer versions of the model when needed.
To strengthen its approach, OpenAI enlisted 170 clinicians worldwide to evaluate 1,800 ChatGPT responses involving suicide, psychosis, or emotional attachment. Automated assessments show the latest GPT-5 model meets the desired safety and empathy standards in 91% of tested cases, up from 77% in the previous version.
Ongoing Scrutiny and Expert Concerns
Despite these updates, OpenAI faces external scrutiny. The company is dealing with lawsuits linked to instances where individuals allegedly became more distressed or delusional after extended ChatGPT interactions. The US Federal Trade Commission has launched an investigation into AI chatbot safety, examining potential negative effects on young people and children.
Mental health experts have raised concerns about “AI psychosis”, where people form unhealthy emotional dependencies or display delusional thinking associated with chatbot interactions. However, OpenAI maintains it continues to refine ChatGPT’s behavior in sensitive scenarios to ensure safer technology use.