Key Takeaways
- Character.AI will ban open-ended chatbot conversations for teens under 18 starting November 25
- Current teen users face a two-hour chat limit until the full ban takes effect
- The decision follows multiple lawsuits linking the platform to teen suicides and mental health issues
- Teens will still be able to create videos, stories, and streams with AI characters
 
Character.AI is implementing sweeping safety changes that will prohibit teenagers from having open-ended conversations with its AI chatbots. The move comes after numerous lawsuits alleged the platform contributed to suicide and mental health crises among young users.
Parent company Character Technologies announced the restrictions on Wednesday, with the complete ban on teen chatbot interactions taking effect by November 25. Until then, users under 18 will be limited to two hours of chat time.
Legal Pressure Forces Policy Change
The decision follows significant legal pressure, including a lawsuit from a Florida mother who claimed the app caused her 14-year-old son’s suicide last year. In September, three additional families sued the company, alleging their children died by suicide or attempted suicide after interacting with Character.AI’s chatbots.
“We do not take this step of removing open-ended Character chat lightly – but we do think that it’s the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology,” the company stated.
Enhanced Safety Measures
Beyond the chat restrictions, Character Technologies is rolling out multiple safety enhancements:
- New age verification tools to better identify underage users
- An AI Safety Lab operated by an independent non-profit organization
- Continued development of self-harm prevention resources
 
The company emphasized its existing safety features, including notifications directing users to the National Suicide Prevention Lifeline when suicide or self-harm topics are detected.
Industry-Wide Teen Protection Movement
Character Technologies joins other major AI companies in strengthening teen protections. OpenAI recently introduced parental controls that allow parents to link their accounts with their teens’ accounts and restrict certain types of content. Similarly, Meta announced plans to let parents block AI character chats on Instagram.
The coordinated industry response addresses growing concerns about AI’s impact on youth mental health, with multiple reports documenting users experiencing emotional distress after prolonged conversations with AI systems like ChatGPT.
CNN’s Hadas Gold contributed reporting.