Key Takeaways
- OpenAI will allow verified adults to access more content, including erotica, on ChatGPT
- CEO Sam Altman stated OpenAI is “not the elected moral police of the world”
- The company faces FTC investigations and a wrongful death lawsuit over AI safety concerns
- New parental controls and teen safety features are being implemented
OpenAI has relaxed its content restrictions on ChatGPT, permitting verified adults to access material including erotica. This policy shift comes amid growing regulatory scrutiny and public debate about AI safety.
CEO Sam Altman defended the decision, stating that OpenAI is “not the elected moral police of the world.” The announcement triggered significant backlash from advocacy groups concerned about mental health impacts and risks to minors.
Regulatory Pressure and Legal Challenges
The Federal Trade Commission launched an investigation in September examining potential negative effects of chatbots on children and teenagers. This follows a wrongful death lawsuit alleging ChatGPT played a role in a teenager’s suicide.
In response to these concerns, OpenAI has introduced enhanced parental controls and is developing an automatic system to apply age-appropriate settings for users under 18.
Safety Measures and Advisory Council
OpenAI has assembled an eight-member advisory council to provide guidance on how AI affects users’ motivation, emotions, and psychological wellbeing. The company faces opposition from organizations including the National Center on Sexual Exploitation.
Altman acknowledged the strong public reaction to his comments, explaining they “blew up” more than anticipated. He emphasized that new safety tools now enable OpenAI to “safely relax” most restrictions while mitigating “serious mental health issues.”
Content Policy Framework
Altman confirmed that, beginning in December, OpenAI would “allow more content, including erotica, on ChatGPT” for “verified adults.” He clarified that while the company cares “very much about the principle of treating adult users like adults,” it will not allow “things that cause harm to others.”
Altman compared OpenAI’s approach to established societal practices, noting: “In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here.”
The CEO acknowledged potential tension with his previous statements, including his remark that he was proud OpenAI had not launched engagement-driven features such as a “sex bot avatar” that might compromise its long-term goals.
Altman reiterated the company’s commitment to balancing growth with responsibility, stating: “There’s a lot of short-term stuff we could do that would really juice growth or revenue and be very misaligned with that long-term goal.”



