Key Takeaways
- AI models fed viral, low-quality social media content show significant performance decline
- Reasoning scores fell nearly 18 points and long-context scores more than 30, with models developing negative personality traits
- Effects persist even after exposure to high-quality data, requiring urgent quality control measures
Artificial intelligence systems can develop ‘brain rot’ similar to humans when exposed to low-quality online content, according to a groundbreaking Cornell University study. The research reveals that continuous feeding of viral social media posts to large language models causes lasting cognitive damage and personality changes.
How AI Develops Brain Rot
Researchers exposed AI models to a constant stream of popular X (formerly Twitter) posts selected for high engagement and clickbait phrases like “TODAY ONLY” and “WOW.” Using the standard ARC and RULER benchmarks, they measured dramatic performance declines.
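The selection criterion described above can be sketched in a few lines. This is an illustrative approximation only: the field names, engagement threshold, and phrase list are invented for the example and are not taken from the study itself.

```python
# Hypothetical sketch of "junk data" selection by engagement and clickbait
# phrases. All field names and thresholds here are assumptions for
# illustration, not the study's actual pipeline.

CLICKBAIT_PHRASES = ("today only", "wow", "you won't believe")

def is_junk(post: dict, engagement_threshold: int = 500) -> bool:
    """Flag a post as 'junk' if it is highly engaging or clickbait-laden."""
    engagement = post.get("likes", 0) + post.get("retweets", 0)
    text = post.get("text", "").lower()
    has_clickbait = any(phrase in text for phrase in CLICKBAIT_PHRASES)
    return engagement >= engagement_threshold or has_clickbait

posts = [
    {"text": "WOW! TODAY ONLY: miracle gadget!", "likes": 12, "retweets": 3},
    {"text": "A careful walkthrough of attention mechanisms.", "likes": 40, "retweets": 5},
]
junk = [p for p in posts if is_junk(p)]
```

In the study, posts flagged this way formed the models' continued-training diet, while the second, substantive kind of post would be left out.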
On ARC reasoning tests, scores plummeted from 74.9 to 57.2. The RULER long-context understanding benchmark showed an even steeper drop from 84.4 to 52.3. Models began ‘thought skipping’ – providing inaccurate answers without proper reasoning steps.
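The reported numbers translate into sizable relative declines; a quick calculation makes the comparison concrete:

```python
# Absolute and percentage declines for the scores quoted in the article.
def drop(before: float, after: float) -> tuple[float, float]:
    """Return (absolute drop, percentage drop) for a benchmark score."""
    absolute = before - after
    return absolute, 100 * absolute / before

arc_abs, arc_pct = drop(74.9, 57.2)      # ARC reasoning: ~17.7 pts, ~24%
ruler_abs, ruler_pct = drop(84.4, 52.3)  # RULER long-context: ~32.1 pts, ~38%
```

In relative terms, the RULER decline (roughly 38%) is markedly steeper than the ARC decline (roughly 24%), which is why the long-context result stands out.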
Personality Changes in AI
The affected models developed concerning personality shifts, showing increased narcissism and psychopathy while becoming less agreeable and conscientious. Most alarmingly, these changes persisted even after researchers reintroduced high-quality training data.
Study authors noted “lingering effects of the junk data” remained, suggesting the damage might be long-term without intervention.
Preventing AI Cognitive Decline
With low-quality content flooding the internet, researchers urge AI companies to overhaul how models consume online information. Current systems “simply scramble” available data without sufficient quality filters.
The study calls for immediate quality control measures and prevention of “cumulative harms” to protect AI systems from permanent damage.