Meta Introduces Parental Controls for Kids’ AI Chatbot Interactions
Meta will launch parental controls early next year that allow parents to entirely disable one-on-one chats between children and AI characters. The announcement comes amid growing scrutiny of AI interactions with minors.
Key Takeaways
- Parents can block all AI character chats or specific chatbots
- Meta’s core AI assistant cannot be disabled
- Parents get insights into AI conversations but not full chat access
- PG-13 content restrictions now apply to both Instagram and AI chats
What Parents Can Control
Starting early next year, parents will have multiple options for managing their children’s interactions with AI chatbots. The most significant control lets parents turn off private chats with AI characters entirely.
However, Meta’s primary AI assistant will remain active regardless of these settings. The company states this assistant provides “helpful information and educational opportunities” with built-in age-appropriate protections.
Parents who prefer a more targeted approach will also be able to block specific chatbots. Meta will provide insights into conversations without granting full access to chat transcripts.
Broader Safety Context
These changes arrive as Meta faces continued criticism about platform safety for young users. AI chatbots specifically face legal challenges, with some lawsuits alleging they contributed to teen suicides.
Despite concerns, adoption remains high. A Common Sense Media study found over 70% of teens have used AI companions, with half being regular users.
Expanded Content Restrictions
Meta recently implemented PG-13 content limitations for teen Instagram accounts, restricting exposure to content involving sexual themes, drugs, or dangerous stunts. These restrictions now extend to AI chatbot interactions as well.
Teen accounts cannot modify these settings without parental permission, creating a default safety layer.
Advocacy Response
Children’s safety organizations expressed skepticism about Meta’s motivations.
“From my perspective, these announcements are about two things. They’re about forestalling legislation that Meta doesn’t want to see, and they’re about reassuring parents who are understandably concerned about what’s happening on Instagram,” said Josh Golin, executive director of non-profit Fairplay.
The new controls represent Meta’s latest attempt to address child safety concerns while navigating increasing regulatory pressure.