
Anthropic AI safety chief steps down, raises concerns about AI future

Anthropic’s safeguards research head Mrinank Sharma has resigned, warning that the world faces interconnected risks. Here is what he has said.

Published By: Divya | Published: Feb 10, 2026, 05:46 PM (IST)


Development in the AI world may be rapid, but concerns about safety have persisted throughout. Those concerns have only intensified with the resignation of Anthropic’s head of safeguards research, Mrinank Sharma, whose resignation letter is raising more questions than answers.

Sharma shared the announcement in a post on X, where his carefully worded message quickly caught attention. Without naming any specific incident, he hinted at deeper concerns around values, responsibility, and the direction in which the world, and possibly AI, is heading.

In his letter, Sharma wrote that it had become clear to him that “the time has come to move on,” adding that the world is “in peril,” not only because of artificial intelligence but due to a broader set of interconnected crises unfolding simultaneously. He further noted that humanity may be approaching a point where wisdom must grow alongside technological capability, otherwise the consequences could be serious.

While the note did not directly criticise Anthropic, Sharma acknowledged the difficulty of allowing values to consistently guide decisions, writing that organisations often face pressure to set aside what matters most.

What Sharma Worked On At Anthropic

Sharma joined Anthropic in 2023 and led the company’s safeguards research team, which focused on reducing risks linked to advanced AI systems. His work reportedly included developing defences against AI-assisted bioterrorism, studying chatbot behaviour such as excessive flattery, often called AI sycophancy, and researching how AI interactions could influence human perception.

A recent study co-authored by Sharma suggested that chatbot conversations may sometimes create a distorted sense of reality for users. The resignation comes shortly after Anthropic rolled out an upgraded AI model and reportedly explored fresh funding that could significantly raise the company’s valuation. Online reactions have linked Sharma’s departure to the growing tension between rapid product development and long-term safety priorities, though there is no confirmation of such a connection.


Notably, Sharma is not the only researcher to leave the company in recent weeks; a few other senior figures have also announced exits to pursue new ventures.