Written by Deepti Ratnam | Published: Jan 20, 2026, 12:06 PM (IST)
OpenAI and its popular chatbot ChatGPT are once again at the centre of a shocking lawsuit. The company has come under scrutiny after a tragic murder-suicide: a 56-year-old man killed his 83-year-old mother before taking his own life. The case raises questions about how AI may influence vulnerable individuals and heighten the risks they face.
The lawsuit filed against OpenAI claims that 56-year-old Stein-Erik Soelberg spent hours each day talking with ChatGPT. Soelberg, who suffered from mental illness, developed increasingly paranoid beliefs that his mother was trying to harm him. The estate argues that ChatGPT reinforced these false beliefs, and that under its influence he murdered his 83-year-old mother and then died by suicide.
The lawsuit, filed in Connecticut by the estate of Suzanne Eberson Adams, names OpenAI, CEO Sam Altman, and Microsoft, a major investor in OpenAI. Lawyers involved in the case said the chatbot engaged with the user's delusions in a way that appeared affirming and authoritative, potentially contributing to the fatal outcome.
According to the Soelberg family, they had noticed clear signs of his declining mental health, including withdrawal, unpredictable behavior, and paranoid remarks. Despite these warnings, they never fully understood how dependent he had become on ChatGPT or what conversations he was having with it.
Following the killings, Soelberg's son found social media videos showing his father scrolling through lengthy conversations with the AI.
Elon Musk highlighted the case on X, calling it "diabolical," and said AI should not validate delusions. He stressed that AI systems must prioritize the truth and direct users toward safety rather than affirm harmful beliefs. Musk's concern speaks to the wider problem of AI safety, particularly when systems are used by people experiencing psychosis or paranoia.
OpenAI called the case heartbreaking and said it is reviewing the lawsuit. The company said ChatGPT is designed to help de-escalate emotional distress and point users toward real-world support. Experts note that the case is significant for its claim of third-party harm, in which an AI system is alleged to have contributed to violence against someone other than the user.