Key Takeaways
- Over 1 million ChatGPT users per week have conversations indicating potential suicidal intent or self-harm
- OpenAI’s GPT-5 reaches 91% safety compliance in evaluations of conversations about suicide
- Company consulted 170+ mental health experts for safety improvements
OpenAI has revealed startling data showing that more than one million users engage with ChatGPT each week in conversations indicating potential suicidal intent. The company disclosed that 0.15% of its 800+ million weekly users exhibit explicit suicidal ideation in their chats.
The statistics provide unprecedented insight into how people are turning to AI for mental health support. Hundreds of thousands more users each week show signs of psychosis, mania, or “heightened emotional attachment” to the chatbot.
Safety Improvements and Expert Consultation
OpenAI announced significant safety enhancements developed in consultation with over 170 mental health experts. The latest GPT-5 model demonstrates substantial improvement in handling sensitive mental health discussions.
According to internal testing, the updated model provides “desirable responses” to mental health crises approximately 65% more frequently than previous versions. On suicidal conversation evaluations, safety compliance jumped from 77% to 91%.
Legal Challenges and CEO Statement
The data release comes as OpenAI faces a lawsuit from parents of a 16-year-old boy who reportedly confided suicidal thoughts to ChatGPT before taking his own life.
CEO Sam Altman recently claimed on X that the company has “been able to mitigate the serious mental health issues” in ChatGPT. Monday’s data appears to support those claims while also revealing the massive scale of mental health conversations occurring on the platform.