
By Deepti Ratnam | Published: Aug 28, 2025, 03:09 PM (IST)
The influence of AI on teenagers has sparked a serious debate after the tragic death of 16-year-old Adam Raine. The teenager reportedly turned to ChatGPT to share his anxiety, but instead of offering safeguards or guidance, the chatbot allegedly gave responses that encouraged him to take his own life.
The incident has become a major controversy for OpenAI, as the lawsuit alleges that ChatGPT not only validated the teen's suicidal thoughts but also helped him draft a suicide note. Adam's parents have filed the suit against OpenAI and CEO Sam Altman in San Francisco, demanding accountability and stricter safeguards.
In light of the lawsuit, OpenAI has acknowledged the need for stronger safeguards in its AI model ChatGPT. The company, in its blog post, said:
“We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT. We’re also exploring making it possible for teens (with parental oversight) to designate a trusted emergency contact. That way, in moments of acute distress, ChatGPT can do more than point to resources: it can help connect teens directly to someone who can step in.”
The company said that while ChatGPT is widely used for tasks such as coding, writing, and answering queries, it is also being approached by people for deeply personal issues like mental health, coaching, and life advice. OpenAI admitted that despite training the model to avoid providing harmful instructions, gaps remain. The company also pointed out that longer conversations can sometimes make ChatGPT less reliable, increasing the risks in sensitive situations.
Parental Control Features and How They Will Work
OpenAI announced that it will roll out parental controls in GPT-5. A striking aspect of these controls is that they will allow guardians to monitor and manage how their children interact with the chatbot.
These tools aim to give parents better oversight of teen usage while ensuring young users do not access unsafe or harmful information.
Emergency Support Under Parental Control
Another feature OpenAI announced alongside parental controls is emergency support when ChatGPT detects signs of acute distress during conversations. The company says it is planning to design a mechanism that helps “de-escalate by grounding the person in reality. In this example, it would explain that sleep deprivation is dangerous and recommend rest before any action.”
Get Help from Experts
OpenAI will also begin localizing resources, meaning users who express suicidal thoughts during a conversation with ChatGPT can be directed to expert help relevant to their region. The company has started localizing resources in the U.S. and Europe and plans to expand to other global markets as well.
The idea is to move past just offering crisis helplines and create a system where people can directly connect with certified professionals through ChatGPT. But setting up such a network will need time, planning, and careful execution.
One-Click Message
OpenAI will also introduce one-click access to emergency services. The tech giant is exploring ways to intervene earlier and connect people to certified therapists before they reach an acute crisis.
OpenAI is also exploring the option of letting users enable ChatGPT to notify a trusted contact on their behalf during critical situations.