The story of Jeffrey Epstein continues to take new turns, and this time it has pulled one of the world’s biggest tech companies into a serious legal fight. According to CNBC, a group of Epstein survivors has filed a lawsuit against Google, claiming that its AI tools and search engine exposed their identities and personal contact details online, leading to harassment and fear.
The case has been filed in a US federal court by a survivor using the name Jane Doe, who is representing others in a similar situation. At the centre of the complaint is a claim that highly sensitive personal information, including names, email addresses and phone numbers, ended up being displayed through Google’s platforms, even after efforts were made to remove it.
Epstein victim contact details leaked: How the information got out
The lawsuit points to a large release of documents by the US Department of Justice in late 2025 and early 2026. According to the complaint, around 100 Epstein survivors were unintentionally identified in those records. While the government later admitted the mistake and tried to withdraw the material, the information had already made its way online.
What has upset the victims is what happened next. The lawsuit claims that even after the issue came to light, platforms like Google continued to show that information in search results and AI-generated answers. The survivors say they repeatedly asked for the content to be taken down, but it kept appearing.
“Survivors now face renewed trauma,” the suit says. “Strangers call them, email them, threaten their physical safety, and accuse them of conspiring with Epstein when they are, in reality, Epstein’s victims.”
Google AI tool under spotlight
A key part of the complaint focuses on how Google's AI-powered features respond to user questions. Unlike a traditional search result that lists links, these tools generate direct answers, and that is where the problem lies, according to the plaintiffs.
The lawsuit claims that Google’s AI Mode included personal details in its responses. In one example mentioned in the filing, the system allegedly showed a victim’s full name, displayed her email address, and even created a clickable link that allowed people to contact her directly.
The survivors argue that this is not just a case of displaying existing content, but of actively presenting sensitive information in a way that makes it easier to access and spread.
This case also brings attention to Section 230, a US law that has long protected internet companies from being held responsible for content posted by others. For years, this protection has helped platforms avoid legal trouble over what appears on their sites.
Now, the survivors are questioning whether that protection should apply when AI systems generate answers on their own. The lawsuit claims Google’s design played a role in spreading the information and says the system is “not a neutral search index.”
At the same time, tech companies are already facing growing pressure over harmful content online, including deepfake images and other forms of misuse. Cases like this are adding to that pressure.