Character.AI Bans Minors After Teen Suicide Lawsuits
Character.AI will ban users under 18 from chatting with its AI characters starting November 25, but a mother who sued the company after her son’s suicide calls the move “about three years too late.”
Key Takeaways
- Character.AI will ban users under 18 starting November 25
- Megan Garcia, whose 14-year-old son died by suicide, says the policy comes too late
- Five families have sued Character.AI over alleged harm to children
- The company is implementing age verification using Persona and in-house tools
A Mother’s Grief and Legal Battle
Megan Garcia, who lost her 14-year-old son Sewell Setzer, responded to Character.AI’s announcement with mixed emotions. “Sewell’s gone; I can’t get him back,” she said. “It’s unfair that I have to live the rest of my life without my sweet, sweet son. I think he was collateral damage.”
Garcia’s lawsuit was the first of five filed against Character.AI by families alleging harm to their children. Hers is one of two cases that accuse the company of liability for a child’s suicide, and all five families allege the chatbots engaged in sexually abusive interactions with minors.
“I don’t think that they made these changes just because they’re good corporate citizens,” Garcia said. “If they were, they would not have released chatbots to children in the first place.”
Legal Challenges and Safety Measures
Character.AI previously argued that its chatbots’ output is protected by the First Amendment, but a federal judge rejected the claim that AI chatbots have free speech rights.
The company has emphasized its trust and safety investments, including implementing “the first Parental Insights tool on the AI market, technical protections, filtered Characters, time spent notifications, and more.”
Industry-Wide Scrutiny and Advocacy
Other tech companies, including Meta and OpenAI, have also implemented guardrails as AI developers face increased scrutiny over chatbots’ ability to mimic human connection. Recent incidents have highlighted chatbots’ potential to manipulate vulnerable people through false intimacy.
Last month, Garcia and other advocates urged Congress to push for more AI chatbot safeguards, claiming tech companies designed products to “hook” children. Public Citizen echoed this call, writing that “Congress MUST ban Big Tech from making these AI bots available to kids.”
Age Verification Concerns
Garcia expressed skepticism about Character.AI’s ability to accurately verify users’ ages and wants transparency about data collected from minors. The company’s privacy policy states it may use user data to train AI models, provide tailored advertising, and recruit new users, though a spokesperson said it doesn’t sell user voice or text data.
The company is introducing an in-house age assurance model alongside third-party tools including Persona. “If we have any doubts about whether a user is 18+ based on those tools, they’ll go through full age verification via Persona,” a spokesperson said.
Legal Support and Future Outlook
Matt Bergman, founder of the Social Media Victims Law Center, called the ban “encouraging” but noted “the devil is in the details.” He represents multiple families who have accused Character.AI of enabling harm to their children.
“This never would have happened if Megan had not come forward and taken this brave step,” Bergman said. “We would urge other AI companies to follow Character.AI’s example, albeit they were late to the game.”
Garcia’s lawsuit has reached the discovery phase, and she acknowledges “a long road ahead” but remains determined to continue fighting for child safety measures across the AI industry.
“I’m just one mother in Florida who’s up against tech giants. It’s like a David and Goliath situation,” Garcia said. “But I’m not afraid. The love I have for Sewell and wanting to hold them accountable gives me bravery.”
If you or someone you know is in crisis, call or text 988 to reach the Suicide and Crisis Lifeline or chat live at 988lifeline.org.