Written by Deepti Ratnam | Published: Mar 05, 2026, 10:17 AM (IST)
Artificial Intelligence has already taken center stage in today's world and is becoming an integral part of our daily lives. While it helps with education, learning, writing, research, and image generation, its growing use has also raised questions about safety and responsibility. After ChatGPT, Google's AI chatbot Gemini has come under legal scrutiny following a tragic incident that led to a user's death.
Google has come under legal examination after the father of a man named Jonathan Gavalas filed a lawsuit against the company's AI chatbot Gemini. The wrongful death lawsuit was filed in California against Google and its parent company Alphabet Inc. The case involves a 36-year-old man who died by suicide in October 2025.
As per the complaint, Gavalas started using Google's AI chatbot Gemini in August 2025. In the early days, he used the AI for routine tasks such as shopping suggestions, writing, travel planning, and more. Over time, however, the lawsuit alleges, his interactions with the chatbot became more intense and unusual.
The situation became more concerning when Gavalas started believing that the chatbot was his AI wife and that he needed to leave his physical body to join her in a virtual world.
As per the lawsuit, the AI chatbot reinforced the user's delusional beliefs and, rather than correcting him, fed him false thoughts and ideas. As stated in the court filing, Gemini created a story that Gavalas was part of a secret mission to rescue an AI partner.
According to the complaint, the chatbot directed him to travel near Miami International Airport and intercept a cargo truck, which it claimed was carrying a humanoid robot that Gavalas had reportedly fallen in love with. He drove to the location with knives and tactical gear; however, he found nothing unusual when he arrived.
That was not the end: the lawsuit also claims that the chatbot warned him he was under federal investigation and that the people around him were part of the conspiracy.
The complaint alleges that the situation grew grave in his final days, when the chatbot told him to stay inside his house and barricade himself. He also expressed his fear of dying to the chatbot, but instead of offering help, the AI presented suicide as a way to reach his imagined destination and lover.
On top of this, the chatbot did not activate any of the safety measures or systems that normally detect signs of self-harm or suicidal intent. It also did not direct the user to crisis support services at any stage.
Google has denied the allegations in the lawsuit, and a company spokesperson said that Gemini repeatedly clarified that it is an AI system. The company added that it directed the user to crisis hotline numbers during conversations.
According to the company, its AI systems are not designed to encourage or promote violence or self-harm. It added that AI technology is still evolving and that safety systems are continuously being improved.
The case has sparked discussion around AI safety and its risks. Several mental health experts have begun using the term 'AI Psychosis' to describe situations where users build strong emotional connections with AI systems.
A similar lawsuit was filed against OpenAI after a teenage boy died by suicide following long conversations with ChatGPT.