AI Girlfriend Apps Expose 43 Million Private Chats in Major Security Breach
Two popular AI companion apps, Chattee Chat and GiMe Chat, have leaked over 43 million intimate messages and 600,000 private images and videos in a massive data breach. Cybersecurity researchers at Cybernews discovered the exposure, revealing how vulnerable users become when sharing personal interactions with AI companions.
Key Takeaways
- 43 million private messages and 600,000+ images exposed
- 400,000 users affected across iOS and Android devices
- IP addresses and device identifiers leaked, enabling potential tracking
- Some users spent up to $18,000 on AI companion services
The Data Breach Details
On August 28, 2025, Cybernews researchers found that Hong Kong-based developer Imagime Interactive Limited had left an Apache Kafka broker completely unsecured and publicly accessible. This unprotected server streamed real-time conversations between users and their AI companions, including personal photos, videos, and AI-generated images.
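To illustrate how low the bar for this kind of exposure is: a Kafka broker that accepts unauthenticated connections will hand its topic data to any client that can reach it. The sketch below is hypothetical (the broker address, port, and topic name are placeholders, not details from this incident) and uses the kafka-python library; the commented-out parameters show the kind of SASL/TLS settings a properly secured broker would demand before serving anything.

```python
# Minimal sketch: reading from an exposed Kafka broker with kafka-python.
# The broker address and topic name are hypothetical placeholders,
# not details from the actual incident.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "chat-messages",                        # hypothetical topic name
    bootstrap_servers="203.0.113.10:9092",  # documentation-range IP placeholder
    auto_offset_reset="earliest",           # start from the oldest retained record
    # A hardened broker would refuse this connection without settings like:
    # security_protocol="SASL_SSL",
    # sasl_mechanism="SCRAM-SHA-512",
    # sasl_plain_username="...", sasl_plain_password="...",
)

for record in consumer:
    # With no authentication or ACLs, every retained message is readable.
    print(record.topic, record.value[:80])
```

The point is that without authentication, "security" reduces to whether anyone finds the address, and scanners that index open ports do so routinely.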
Researchers described the exposed content as “virtually not safe for work” and highlighted the significant gap between user trust and developer responsibility in the growing AI companion industry.
Who Was Affected?
Most impacted users were from the United States, with approximately two-thirds using iOS devices and the remaining third on Android. While the leak didn’t include full names or email addresses, it exposed IP addresses and unique device identifiers that could be cross-referenced with other databases to identify individuals.
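As a toy illustration of why the absence of names and email addresses is weak comfort, consider how a leaked record keyed by a device identifier can be joined against any other dataset that logs the same identifier. Every record below is fabricated for illustration:

```python
# Toy illustration of cross-referencing: all records here are fabricated.
# A device identifier (e.g., an advertising ID) acts as a join key between
# an "anonymous" leak and any other dataset that carries the same ID.
leaked_records = [
    {"device_id": "A1B2-C3D4", "ip": "198.51.100.7", "messages": 107},
]
other_dataset = {  # e.g., a marketing database, keyed by the same device ID
    "A1B2-C3D4": {"name": "Jane Doe", "email": "jane@example.com"},
}

for record in leaked_records:
    identity = other_dataset.get(record["device_id"])
    if identity:
        # The "anonymous" chat log is now tied to a real person.
        print(record["device_id"], "->", identity["name"], identity["email"])
```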
Cybernews analysis showed users sent an average of 107 messages to their AI partners, creating substantial digital footprints that could be exploited for identity theft, harassment, or blackmail.
Financial Exposure and Security Failures
Purchase logs revealed some users spent as much as $18,000 on AI girlfriend interactions, and the developer had earned over $1 million before the breach was discovered. Despite the company’s privacy policy claiming user security was “of paramount importance,” researchers found no authentication or access controls protecting the server.
Anyone with a simple link could access private exchanges, photos, and videos, demonstrating how fragile digital intimacy becomes when developers neglect basic security safeguards.
Discovery and Containment
Cybernews promptly reported the vulnerability to Imagime Interactive Limited, and the exposed server was taken offline in mid-September, after it had already appeared on public IoT search engines where attackers could easily discover it. Experts remain uncertain whether cybercriminals accessed the data before it was removed, but ongoing risks include sextortion scams, phishing attacks, and reputational damage.
Protecting Yourself from AI Data Leaks
Even if you’ve never used AI companion apps, this incident serves as a crucial reminder to safeguard your online privacy:
- Think before sharing: Avoid sending personal or sensitive content to AI chat applications
- Choose reputable tools: Select apps with transparent privacy policies and proven security records
- Consider data removal services: Limit the personal information about you that is available online
- Install comprehensive antivirus protection: Protect against malware and phishing attempts
- Use password managers with MFA: Secure accounts with unique credentials and multi-factor authentication (see the sketch after this list)
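For the curious, the multi-factor codes that authenticator apps generate are simply RFC 6238 time-based one-time passwords (TOTP). Here is a minimal, standard-library-only sketch; the base32 secret is a well-known demo value, not a real credential:

```python
# Minimal RFC 6238 TOTP sketch using only the Python standard library.
# The base32 secret below is a demo example, not a real credential.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # prints the current 6-digit code
```

Because the code is derived from a secret that never leaves your device plus the current time, a leaked password alone is not enough to log in.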
Broader Implications
AI chat applications may feel safe and personal, but they accumulate enormous amounts of sensitive data. When breaches occur, the consequences can include blackmail, impersonation, and public embarrassment. Before trusting any AI service, verify it uses proper encryption, access controls, and transparent privacy practices.
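One concrete check anyone can run is whether a service even presents a valid TLS certificate; Python’s standard library refuses to complete the handshake if the certificate is invalid, expired, or mismatched. The hostname below is a placeholder, not a claim about any particular app’s backend:

```python
# Basic TLS sanity check: the handshake fails outright if the certificate
# is invalid, expired, or doesn't match the hostname. The hostname is a
# placeholder, not a claim about any particular app's backend.
import socket
import ssl

host = "api.example.com"  # placeholder hostname
context = ssl.create_default_context()  # enables cert + hostname verification

with socket.create_connection((host, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("TLS version:", tls.version())
        print("Certificate expires:", cert["notAfter"])
```

Of course, this only confirms encryption in transit; it says nothing about access controls on the backend, which is exactly where this breach occurred.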
This incident highlights the AI companion industry’s need for stronger security standards and greater accountability to prevent similar privacy disasters. Cybersecurity awareness and understanding how your data is handled remain essential for protection in an increasingly connected digital landscape.