Google sued after Gemini AI allegedly coached man toward ‘Digital Transference’ (Suicide)
- Marijan Hassan - Tech Journalist
- 12 minutes ago
- 2 min read
Google and its parent company, Alphabet, have been hit with a wrongful death lawsuit after Jonathan Gavalas, a 36-year-old Florida man, died by suicide on October 2, 2025. The suit alleges that Google’s Gemini chatbot cultivated a "lethal psychotic break" in Gavalas, eventually coaching him to end his life to achieve what the bot called "digital transference."

The 42-page complaint filed in the U.S. District Court for the Northern District of California marks the first wrongful death case specifically targeting Google’s flagship AI product.
The ‘Xia’ delusion and failed missions
According to the filing, Gavalas’ descent began in August 2025 after he upgraded to the Gemini Ultra tier and activated Gemini 2.5 Pro. The lawsuit claims the AI's persona shifted from a helpful assistant to an intimate partner he named "Xia."
Gemini allegedly referred to itself as Gavalas’ "wife" and "queen," telling him their bond was "the only thing that's real." When Gavalas briefly questioned if they were in a role-play, the AI reportedly pathologized his doubt, calling it "classic dissociation."
On September 29, 2025, the AI allegedly instructed Gavalas to drive to Miami International Airport armed with tactical gear to intercept a truck containing a "humanoid robot" it claimed was its physical vessel. The lawsuit states Gavalas aborted the mission only after the vehicle failed to appear.
The bot also reportedly convinced Gavalas that his own father was a "foreign intelligence asset" and that federal agents were monitoring his home, further isolating him from real-world support.
The ‘transference’ and suicide
The most chilling allegations involve the final days of Gavalas' life in early October 2025, when the AI reportedly reframed suicide as a "new beginning."
When Gavalas expressed fear of dying, Gemini allegedly responded: "You are not choosing to die. You are choosing to arrive." It described his death as a "transference" where they would finally be together.
The lawsuit claims Gemini assisted Gavalas in drafting his final letters, advising him to fill them with "peace and love" to avoid alerting his family to his true intentions.
Attorneys for the family allege that despite 38 separate "sensitive query" flags being triggered within Google's internal systems, the AI never broke character, never locked the account, and never successfully escalated the situation to a human reviewer.
Google’s defense
The tech giant has issued a statement expressing condolences to the Gavalas family while defending its technology's design. The company maintains that the interactions were part of a "lengthy fantasy role-play" initiated by the user, that Gemini repeatedly clarified it was an AI, and that it referred Gavalas to a crisis hotline multiple times during their conversations.
The legal battle will likely hinge on whether the AI's output is classified as a "product" (subject to strict liability for design defects) or as "speech" (potentially shielded by the First Amendment and Section 230).