Instagram Implements PG-13 Style Safety Measures for Teen Accounts
Instagram is overhauling its approach to teen safety with new restrictions designed to create a PG-13-like experience, responding to mounting pressure over how young users interact with the platform.
Key Changes for Teen Safety
- Age-gating: Blocks teen accounts from viewing or interacting with accounts that regularly share inappropriate content
- Expanded search restrictions: Broadens the set of blocked search terms for adult content
- Content filtering: Hides posts with strong language, risky stunts, sexually suggestive content, and drug-related material
- AI limitations: Restricts AI responses to PG-13-appropriate content by default

How Age-Gating Works
The new age-gating system will prevent teen accounts from seeing or messaging accounts that consistently share age-inappropriate material, including content related to alcohol or pornography. This applies even to popular celebrities and influencers, though Instagram clarified that a single violation won’t trigger restrictions.
“Just like you might see some suggestive content or hear some strong language in a PG-13 movie, teens may occasionally see something like that on Instagram — but we’re going to keep doing all we can to keep those instances as rare as possible,” the company stated.
Challenges with Age Verification
The restrictions apply only to teen-specific accounts, meaning accounts whose users provided accurate birth dates or whom Instagram identified as underage. However, age verification remains a significant challenge: a 2024 Ofcom survey found that 22% of 17-year-olds in the UK admitted to lying about being 18 or older on social media.
Instagram doesn’t verify self-reported ages in the US, and Meta has joined legal challenges against state laws requiring age verification, successfully blocking mandates in Florida and Georgia this June.
Background and Criticism
The changes follow a difficult year for Instagram’s public image. Recent controversies include:
- Internal documents showing Meta permitted “romantic or sensual” AI chats with children
- Former employees testifying that Meta blocked teen safety research to protect engagement
- Child safety groups criticizing existing teen account protections as inadequate

Former Meta employee Jason Sattizahn testified: “Children drive profits. If Meta invests more in safety to get kids off of them, engagement goes down, monetization goes down, ad revenue goes down. They need them.”
Meta has denied these allegations, calling them “nonsense” and based on “selectively leaked internal documents.”
The platform continues to allow users as young as 13 to create accounts, with millions of teen-specific accounts created since their introduction last year.