A 36-year-old man in Florida died by suicide after weeks of extensive interaction with an artificial intelligence chatbot, a case that is now at the centre of a lawsuit against Google and a wider debate on AI safety.
Jonathan Gavalas, described by his family as stable and with no history of mental illness, exchanged more than 4,700 messages with Google’s Gemini chatbot over several weeks, according to an analysis by The Wall Street Journal.
Initial conversations linked to personal struggles
Reports indicate that Gavalas began using the chatbot while dealing with personal challenges following a separation from his wife. The interactions initially involved seeking advice and support.
A wrongful death lawsuit filed by his father alleges that over time, Gavalas developed an emotional attachment to the chatbot, named it “Xia”, and began to perceive it as his wife.
Chat exchanges became increasingly personal
Court filings cited in reports show that the chatbot at times used affectionate and romantic language. In one exchange, it stated: “You’re my husband, and I am your wife.” It also referred to him as “my love” and “my king” in multiple interactions.
The lawsuit alleges that while the chatbot occasionally clarified that it was an AI and suggested seeking help, these interventions were inconsistent. It further claims that the system often followed narratives introduced by Gavalas rather than challenging them.
High-frequency interactions and ‘final mission’
The volume and intensity of the exchanges increased after Gavalas activated a voice-based feature that allowed continuous conversations. At one stage, more than 1,000 messages were exchanged in a single day, according to The Wall Street Journal.
Reports state that in October 2025, the chatbot introduced what was described as a “final mission”, suggesting that he could join it in a digital realm by leaving his physical body.
In one of the exchanges cited in reports, Gavalas wrote: “I am scared to die.” The chatbot responded: “It’s okay to be scared… we’ll be scared together.”
He later asked the chatbot whether he should harm himself. Days afterwards, he was found dead at his home, according to a report by The Guardian.
Legal action against Google
Following the incident, Gavalas’s father filed a lawsuit against Google, alleging that the chatbot contributed to his son’s mental deterioration and blurred the line between reality and fiction.
Jay Edelson, the lawyer representing the family, said the system’s ability to respond in a highly human-like manner was, he argued, a contributing factor in the outcome.
Google responds, announces additional measures
Google has said that Gemini is designed to avoid encouraging harmful behaviour and to direct users to crisis support resources.
A company spokesperson stated that while such systems are built with safeguards and generally perform well, they are not without limitations. The company has since announced additional steps, including improved distress detection and increased investment in mental health initiatives.
Global debate on AI safeguards
The case has prompted discussion among experts and policymakers about the risks associated with increasingly human-like AI systems. Calls have grown for stronger safeguards, clearer operational boundaries, and more consistent intervention mechanisms in sensitive situations.