
By Deepti Ratnam | Published: Aug 05, 2025, 04:14 PM (IST)
OpenAI is preparing to unveil its next major model, GPT-5, with significant updates and enhancements. As excitement builds around GPT-5, however, the company is rolling out new updates to ChatGPT focused on responsible usage and user well-being. Under this update, ChatGPT will remind users to take a break if they have been using the platform for long periods. The change is intended to promote healthier interaction patterns while preparing the platform for its next big upgrade.
OpenAI has rolled out a new feature in ChatGPT that reminds users of their screen time to help prevent fatigue. During long sessions, ChatGPT will display gentle prompts encouraging users to pause. The company says these reminders are designed to support healthier engagement with the chatbot.
“We build ChatGPT to help you thrive in all the ways you want. To make progress, learn something new, or solve a problem, and then get back to your life,” OpenAI explained in a statement.
The company says that the goal is not to stop users from having conversations with the chatbot, but to offer timely assistance and encourage them to re-engage with real life.
OpenAI is also tweaking how ChatGPT responds to complex personal issues, so users facing relationship problems or emotional distress will receive guidance from the chatbot rather than definitive answers. ChatGPT will now walk users through a more reflective process, outlining different perspectives, pros and cons, and possible outcomes.
According to OpenAI, this update is essential because it reduces the perception that ChatGPT can serve as a substitute for professional mental health support when handling personal issues. It reflects a growing effort to deploy AI more responsibly in high-stakes situations.
Beyond these updates, the company is also working to improve ChatGPT's ability to recognize emotional distress. The system is being retrained to avoid responses that could reinforce negative thoughts or foster dependence on the AI.