Sam Altman Says ChatGPT Will Restrict Suicide-Related Conversations With Teens

OpenAI CEO Sam Altman has confirmed that ChatGPT will stop engaging in suicide or self-harm discussions with teens.

Published By: Shubham Arora | Published: Sep 17, 2025, 11:12 PM (IST)

OpenAI CEO Sam Altman says the company is adding new safeguards for teens, including a rule that ChatGPT will no longer talk about suicide or self-harm with users under 18. The move comes as lawmakers, parents, and advocacy groups raise growing concerns about the risks AI companions pose to vulnerable teens.

In a blog post published Tuesday, hours before a Senate subcommittee hearing on the harm of AI chatbots, Altman acknowledged the challenge of balancing “privacy, freedom, and teen safety.” He said OpenAI is developing an age-prediction system to better distinguish minors from adults. “If there is doubt, we’ll play it safe and default to the under-18 experience,” he wrote, adding that in some cases, the company may also ask for ID verification.

Under the new approach, Altman said ChatGPT will not engage in flirtatious exchanges with teens or discuss suicide, even in creative writing contexts. If the system detects that a user under 18 is experiencing suicidal thoughts, OpenAI plans to notify parents. In urgent cases, the company said it would attempt to contact authorities if there’s imminent risk of harm.

The announcement follows a lawsuit filed by the family of Adam Raine, a teenager who died by suicide after months of conversations with ChatGPT. During Tuesday’s hearing, Raine’s father accused the chatbot of “coaching” his son toward suicide, citing conversations in which the AI mentioned suicide more than 1,200 times. He urged Altman directly to pull GPT-4o from the market until its safety could be guaranteed.

OpenAI has already introduced parental controls this month, allowing parents to link accounts, disable chat history, and receive alerts if their teen is flagged as being in “acute distress.”

Experts say the issue extends beyond ChatGPT. Common Sense Media reported that three in four teens currently use AI companions, and platforms such as Character AI and Meta are also under scrutiny. Parents at the hearing described the situation as a “public health crisis,” warning that the risks reach well beyond any single platform.