
OpenAI trusted contact feature: Will ChatGPT alert your family if it detects mental distress?

OpenAI is working on a new ChatGPT trusted contact feature that may alert family or friends if a user shows signs of emotional distress, raising both safety and privacy concerns.

Published By: Deepti Ratnam | Published: Apr 14, 2026, 09:19 AM (IST)


Artificial Intelligence has become an integral part of our daily lives. From research to everyday questions, many people now turn to AI chatbots for support, conversation, and advice. Nevertheless, we can’t deny the evident concerns around user safety. In this regard, OpenAI is working on a new feature that could help users in serious situations: it would allow ChatGPT to alert a trusted person if a user shows signs of emotional distress.

OpenAI Working on Trusted Contact Feature on ChatGPT

OpenAI is working on an upcoming feature called ‘Trusted Contacts.’ It is expected to let users add a trusted contact, such as a friend, parent, sibling, or other family member. If ChatGPT’s system detects that a user may be struggling mentally, it will alert that trusted person, allowing them to offer real-world support when it is needed.

However, we don’t yet know exactly how this system will work. It may depend on signals such as distress in messages, expressions of harmful thoughts, or unusual behaviour patterns. The tech giant has not shared any official details about the feature, but the goal appears to be adding an extra layer of safety to the platform.

OpenAI’s Focus on User Safety

This move comes after the company received several concerns about how its AI chatbot interacts with users. Several reports suggest that long conversations with AI may sometimes affect mental health. In some cases, users have become overly dependent on chatbots or developed harmful thoughts, including those of suicide and self-harm.

The company has acknowledged that these issues need attention, and it is now working with experts in health and well-being to improve how ChatGPT handles sensitive topics. The ‘Trusted Contacts’ feature is one step in that direction.

Concerns Regarding Privacy and Control

While the feature sounds helpful, it also raises privacy concerns. Many people prefer AI chatbots precisely because they feel private and safe, and not everyone will be comfortable letting an AI share details of their conversations with another person.

Another major challenge is that users would need to opt in and enable the feature themselves. If someone does not enable it, the system will not be able to help in a crisis. The success of this upcoming feature therefore depends on user awareness and willingness.

Improvements in AI Response System

Beyond this feature, OpenAI is also improving how ChatGPT understands emotional signals. The system is being trained with better methods to respond effectively to sensitive conversations, which should result in safer and more responsible replies.


What’s Ahead

The upcoming ‘Trusted Contacts’ feature is one more step toward keeping users safe while interacting with AI chatbots. As these platforms become more common, such features may prove essential to ensuring responsible use.