Microsoft Warns of ‘Whisper Leak’ AI Chat Vulnerability
Microsoft has uncovered a serious security flaw, dubbed “Whisper Leak,” that could expose the topics of your private conversations with AI chatbots like ChatGPT and Gemini. This side-channel attack lets an eavesdropper infer what you are discussing by analyzing the size and timing patterns of encrypted network traffic, posing significant risks for users under oppressive regimes.
Key Takeaways
- Attackers can identify conversation topics from encrypted AI chatbot traffic.
- The “Whisper Leak” flaw poses high risks for users discussing sensitive subjects.
- Microsoft found attackers could flag conversations about sensitive topics with 100% precision.
How the Whisper Leak Attack Works
The vulnerability exploits how AI chatbots generate responses. Large language models (LLMs) produce text one token at a time in a streaming fashion. Even though the traffic is encrypted, the size and timing of the packets carrying those tokens form patterns that can reveal the conversation’s subject matter.
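To make the signal concrete, here is a minimal, hypothetical Python sketch of what an on-path observer works with: nothing but the sizes and arrival times of the encrypted records that carry each streamed token. The function and the sample values are illustrative assumptions, not Microsoft’s tooling.

```python
# Hypothetical illustration (not Microsoft's tooling): an on-path
# observer never decrypts anything, yet the sizes and timings of the
# encrypted records carrying each streamed token form a fingerprint.

def observe_stream(record_sizes, arrival_times):
    """Turn raw observations into (size, inter-arrival gap) features."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return list(zip(record_sizes[1:], gaps))

# Simulated capture: encrypted record sizes track token lengths, so
# responses on different topics leave different "shapes" in traffic.
sizes = [85, 91, 88, 120, 84]            # bytes per encrypted record
times = [0.00, 0.04, 0.09, 0.15, 0.19]   # arrival times in seconds
print(observe_stream(sizes, times))
```

Nothing in this sketch breaks TLS; the metadata alone is the side channel.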
Microsoft explained that internet service providers, government agencies, or anyone on the same Wi-Fi network could monitor this encrypted traffic to learn what users are discussing with AI assistants.
“If a government agency or internet service provider were monitoring traffic to a popular AI chatbot, they could reliably identify users asking questions about specific sensitive topics — whether that’s money laundering, political dissent, or other monitored subjects — even though all the traffic is encrypted,” Microsoft said in its blog post.
High Accuracy and Real-World Risks
Microsoft researchers simulated attack scenarios in which an adversary could observe, but not decrypt, the traffic. Training machine-learning models to act as AI-powered eavesdroppers (a toy sketch of this setup appears after the quote below), they found attackers could achieve:
- 100% precision when flagging conversations about a sensitive topic
- Detection of 5–20% of target conversations
- Nearly zero false alarms
“Nearly every conversation the cyberattacker flags as suspicious would actually be about the sensitive topic — no false alarms. This level of accuracy means a cyberattacker could operate with high confidence, knowing they’re not wasting resources on false positives,” the company warned.
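As a rough illustration of that high-precision regime, the toy sketch below trains a classifier on synthetic traffic traces and raises its decision threshold so flagged conversations are almost always true positives. The feature construction, the gradient-boosted model, and every number here are assumptions for demonstration, not Microsoft’s experimental pipeline.

```python
# Toy reproduction of the eavesdropper setup on synthetic data; the
# traces and model are illustrative, not Microsoft's dataset or tooling.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synthetic_trace(sensitive, n_packets=50):
    # Pretend sensitive-topic responses have a slightly different
    # size/timing profile; in reality the signal comes from how the
    # model tokenizes and streams its answer.
    sizes = rng.normal(90 if sensitive else 84, 10, n_packets)
    gaps = rng.exponential(0.05 if sensitive else 0.06, n_packets)
    return np.concatenate([sizes, gaps])

X = np.array([synthetic_trace(i < 500) for i in range(1000)])
y = np.array([1] * 500 + [0] * 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier().fit(X_tr, y_tr)

# Flag only high-confidence traces: trading recall for the
# near-zero-false-positive regime the attack operates in.
flags = clf.predict_proba(X_te)[:, 1] > 0.9
print("precision:", precision_score(y_te, flags))
print("recall:   ", recall_score(y_te, flags))
```

Raising the threshold is the design choice that matters: the attacker gives up recall, catching only a fraction of target conversations, in exchange for flags that are almost never wrong, which mirrors the 5–20% detection figure above.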
The company emphasized that this poses particular danger in countries with oppressive governments, where discussions of protests, banned materials, election processes, or journalism could be targeted. Microsoft warned the threat will likely worsen as attackers collect more training data and adopt more sophisticated AI models over time.