OpenAI Responds to ChatGPT Erotica Feature Backlash
OpenAI CEO Sam Altman has addressed public outrage following the company’s announcement that ChatGPT would begin generating adult-oriented content. The controversial decision to allow AI-generated erotica and relax certain mental health safeguards sparked immediate criticism from users concerned about potential social harm.
Key Takeaways
- OpenAI will permit ChatGPT to create adult-oriented content for users
- Company says mental health protections remain unchanged
- Sam Altman emphasizes balancing user freedom with safety measures
The announcement on Tuesday revealed OpenAI’s plan to enable erotica creation for adult users while modifying some protective measures. Critics argued this move contradicted the company’s mission to improve society and could lead to negative social consequences.
Altman expressed surprise at the intense public reaction, stating he hadn’t anticipated such significant interest in the policy change. He clarified that OpenAI isn’t relaxing mental health policies and continues to prioritize safety.
The CEO explained the adjustments aim to provide “more user freedom for adults” while maintaining appropriate boundaries, comparing the approach to content rating systems like R-rated movies.
“As AI becomes more important in people’s lives, allowing a lot of freedom for people to use AI in the ways that they want is an important part of our mission,” Mr Altman said in a post on X, formerly known as Twitter.

“It doesn’t apply across the board of course: for example, we will still not allow things that cause harm to others, and we will treat users who are having mental health crises very different from users who are not.

“Without being paternalistic we will attempt to help users achieve their long-term goals.

“But we are not the elected moral police of the world. In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here.”
Altman emphasized that harmful content remains prohibited and that users experiencing mental health crises will be treated differently. The company, he said, aims to support users’ long-term goals without being paternalistic, while acknowledging it is not the “elected moral police of the world.”