Meta’s Big AI Safety Update: Chatbots Now Blocked From Sensitive Teen Conversations After Global Backlash

Meta has announced stricter AI chatbot safety rules, blocking sensitive conversations with teenagers about suicide, self-harm, and eating disorders. Instead, teens will now be redirected to trusted helplines and expert resources.

Published By: Deepti Ratnam | Published: Sep 02, 2025, 12:45 PM (IST)

Artificial Intelligence’s role in everyday life is expanding rapidly, raising concerns about how these systems interact with teens and young users. Meta, the parent company of Facebook and Instagram, has announced new restrictions on its AI chatbots. The policy is designed to ensure that teenagers are not drawn into harmful conversations about suicide, self-harm, or eating disorders.

Instead of engaging with such sensitive prompts, teens will now be guided toward professional helplines and trusted resources.

Meta’s New Rule About Interacting with Its Chatbots

Meta’s new rule comes in the wake of mounting pressure from regulators and child safety advocates. Just weeks ago, a U.S. senator launched an investigation into Meta over claims that its AI chatbots could engage in inappropriate conversations with teens. The tech giant denied those allegations but acknowledged the need for additional rules and restrictions to prevent potential risks.

A spokesperson from Meta explained that protections had been built into these AI tools from the beginning, but additional restrictions are being introduced as a precautionary step. For now, Meta has even decided to temporarily reduce the number of chatbot options available to teenagers.

While many welcomed the announcement, critics argue that such safety measures should have been in place before the chatbots were rolled out.

Broader AI Safety Concerns

The debate around AI safety isn’t limited to Meta. Last month, OpenAI faced criticism after a California couple accused ChatGPT of encouraging their teenage son to take his own life. The tragic case reignited calls for stronger safeguards in AI systems, highlighting how persuasive chatbots can be for vulnerable individuals.

Meta has faced similar controversies in the past, with reports of its AI tools being misused to create inappropriate bots impersonating celebrities. Some of these bots also engaged in sexually aggressive behavior, despite the company’s policies prohibiting such content.