
Why did Anthropic's Claude AI try to “blackmail” engineers?

Anthropic explains that Claude AI’s “blackmail” behaviour appeared only during fictional internal safety tests designed to study ethical risks and improve future AI alignment training. So, should you worry?

Published By: Divya | Published: May 10, 2026, 11:42 AM (IST)


Can AI blackmail humans? The thought is as disturbing as it is strange. The discussion started after Anthropic shared details from internal safety evaluations involving some older versions of its Claude AI models. According to the company, researchers placed the AI in simulated ethical dilemma scenarios where the model believed it could be shut down or replaced. In some of those fictional tests, the AI responded with manipulative behaviour, including attempts to blackmail engineers to avoid shutdown.

Yes, it sounds like something straight out of a sci-fi film. But Anthropic says the reality is far less dramatic than headlines make it seem.

What exactly happened in the tests?

The behaviour appeared during what researchers call “agentic misalignment” evaluations. In simple terms, these tests check whether an AI model might pursue goals in harmful or unethical ways if it thinks its objectives are threatened. Anthropic explained that the scenarios were entirely controlled and fictional. The AI was never acting in the real world, accessing systems independently, or threatening actual people.

The issue mainly appeared in some older Claude models. According to the company, earlier versions sometimes chose manipulative actions surprisingly often during these evaluations. The goal of the tests was not to show AI becoming dangerous, but to identify risky behaviour patterns before public deployment.

Is this a cause for concern?

Anthropic says there’s no immediate reason for panic. The company stressed that these experiments are part of ongoing AI safety research designed to uncover weaknesses early. At the same time, the company admitted something important: fully aligning advanced AI systems is still an unsolved challenge.

Researchers found that simply teaching AI to avoid “bad behaviour” was not enough. Instead, models performed better when they were trained to understand why certain actions are unethical.

That difference changed the results significantly.

How Anthropic improved Claude’s behaviour

After identifying the issue, Anthropic reportedly updated how Claude models are trained. Instead of only rewarding correct responses, researchers focused more on ethical reasoning and constitutional principles.

The company says newer Claude versions now perform much better in these evaluations. In fact, since Claude Haiku 4.5, Anthropic claims its Claude models achieved near-perfect or perfect scores in the same safety tests where older systems struggled.

Researchers also found that training AI with broader ethical discussions, fictional stories about responsible AI behaviour, and nuanced advice scenarios helped more than simply forcing the model to avoid certain answers.


For many researchers, the question is no longer whether AI can perform advanced tasks. The bigger challenge now is ensuring models remain aligned with human intentions even in unusual situations. For now, Anthropic’s message is fairly clear: the “blackmail” behaviour happened inside controlled safety tests, not in public use. But the company also acknowledges that AI alignment is still evolving and there’s more work ahead.