
Pentagon vs Anthropic: Hours after Trump’s AI ban, US military reportedly used Anthropic in Iran operation

The Pentagon vs Anthropic dispute deepens after reports claim US forces used the AI tool during a strike on Iran, just hours after Trump announced a federal ban.

Published By: Deepti Ratnam | Published: Mar 02, 2026, 09:58 AM (IST)


The use of artificial intelligence in military operations has become a fresh point of debate after the United States and Israel struck Iran. According to a recent report, US forces deployed AI tools developed by Anthropic during the joint strike on Iran, reportedly just hours after then-US President Donald Trump announced a ban on federal use of the company's technology. The development has raised questions about military AI policy and government regulation.

How AI Tools Were Used in the Iran Operation

Reports indicated that Anthropic's Claude AI tool was used in operations associated with Iran by commands such as United States Central Command. Intelligence analysis, target recognition, and battle simulations are among the reported uses of these tools. Officials did not confirm which specific systems were involved. Nonetheless, according to sources familiar with the matter, AI tools are now entrenched in military procedures.

The timing is significant. Reports of the AI tools' use surfaced shortly after Donald Trump publicly announced that the US government would no longer work with Anthropic, raising the question of whether the order took immediate effect.

Anthropic vs Donald Trump Administration

Trump said the government would no longer do business with Anthropic. Defense Secretary Pete Hegseth also criticized the company, calling it a possible supply chain threat. The administration alleged that Anthropic had failed to grant the military unrestricted access to its AI systems by a certain deadline.

Under CEO Dario Amodei, Anthropic declined to remove certain safety safeguards from its AI products. The firm said those safeguards exist to prevent misuse. It also argued that the government's move lacked a sound legal basis and said it would challenge the decision.

How Does It Impact Military AI Policy?

The conflict between the Pentagon and Anthropic may reshape the rules for AI in the military. If the government pressures companies to remove safety thresholds, the use of AI in defense tasks could expand. If companies instead maintain strict protections, some military applications will remain restricted.

Anthropic maintains restrictions against mass surveillance and fully autonomous weapons. According to the company, these limits are necessary for responsible AI use in defense.

National Security and Broader AI Policy Making

This conflict highlights growing tension between technology companies and state institutions. Military departments need adaptable AI tools, while companies are focused on safety requirements. The outcome could shape future defense contracts and AI policy.


The issue is not just a matter of a single company. It also reflects a broader debate over the role of artificial intelligence in national security operations.