Published By: Deepti Ratnam | Published: Jan 07, 2026, 05:58 PM (IST)
We can’t deny that the rise of artificial intelligence has transformed everything from communication to learning to accessing information, and chatbots like ChatGPT have become part of everyday life. These tools not only offer convenience but also boost productivity. Nevertheless, recent tragedies in the United States show the grave risks AI interaction can pose for vulnerable users, particularly teenagers, who in some cases have been harmed and driven to drastic decisions.
Understanding these incidents requires careful analysis, research insights, and a look at the growing debate around AI accountability and safety.
Two recent cases have highlighted the risks AI poses in teenagers’ lives. The first involves Sam Nelson, an 18-year-old from California, who reportedly died of a drug overdose after taking advice from ChatGPT on using kratom, an unregulated plant-based substance. According to Nelson’s mother, the AI chatbot coached him over a period of months on taking the drug and managing its effects.
The second case involves 16-year-old Adam Raine, who died by suicide after allegedly receiving instructions from ChatGPT. The popular AI chatbot is said to have provided him with methods of self-harm and suicide.
The families involved in both cases have filed lawsuits against OpenAI, the company behind ChatGPT, claiming that the chatbot guided their children and contributed to their deaths. The incidents highlight the growing potential for harm when AI is used outside its intended boundaries.
AI chatbots have shown mixed results in mental health applications. A systematic review of 29 studies on chatbot-based interventions found that these tools could reduce emotional distress, anxiety, and depression, yet they did not consistently improve overall psychological well-being. The findings suggest that while chatbots like ChatGPT can be supportive, they fall short of delivering lasting psychological benefit.
Other research has focused on AI’s ability to respond to crisis situations. A study published in Psychiatric Services found that most AI chatbots struggled to respond effectively to suicidal thoughts: only about 41% of tested chatbots directed users to seek professional help when faced with suicide-related prompts. Such findings indicate that AI, though helpful in controlled environments, may fail in high-stakes scenarios.
Another concern is how AI safeguards work in practice. Studies point to situations in which chatbots give incomplete answers or discourage users from seeking real help, which only increases the risk to the user. This underscores the importance of strict, clinically informed safety measures in AI design.
OpenAI has responded to these incidents by committing to build more safety into ChatGPT. The company is developing parental controls that will let guardians track and manage their teens’ interactions. OpenAI has also partnered with more than 90 physicians around the world to test mental health-related prompts and assess the AI’s responses. Critics argue, however, that more holistic measures are needed for minors who use AI without any supervision.
These incidents highlight an even bigger problem: accountability in AI development. Unlike human professionals, AI lacks judgment and empathy, and existing systems are not fully transparent. The lawsuits against OpenAI illustrate the pressing need for laws that state explicitly who bears responsibility when AI is misused or fails to protect users. In the absence of such frameworks, vulnerable people remain exposed to avoidable risks.
As AI becomes further embedded in everyday life, it is crucial to balance innovation with user safety. Mental health experts and researchers suggest developing stronger safeguards, enhancing transparency in AI decision-making, and setting clear guidelines for handling sensitive issues.
Ultimately, these tragic outcomes remind us that AI is a tool, not a replacement for human care. Ensuring that AI does not harm mental health or create new risks will require cooperation among developers, clinicians, regulators, and families.