Anthropic is being banned by the US government, especially the Pentagon under Trump and Secretary of Defense Pete Hegseth: it has been labeled a supply chain risk and had its contracts cut off because it insists on upholding ethical red lines in AI.

This is a real event, unfolding between January and March 2026, and it is shaking the AI world.

In short: Anthropic, the company that developed Claude, once signed a contract worth approximately $200 million with the Pentagon, and Claude was even used in several military operations, such as intelligence support in Venezuela and Iran.

The Pentagon demanded "any lawful use," including mass domestic surveillance of US citizens and fully autonomous lethal weapons: AI that decides to kill without human intervention. Anthropic CEO Dario Amodei flatly refused. The company supports the US military's use of AI for intelligence and combat, but flagged these two demands as violating democratic values and posing risks to civil liberties. Current AI, in its view, is not yet secure or reliable enough to give machines the power of life and death.

The result: in February 2026, Hegseth issued an ultimatum with a deadline of February 27; Anthropic did not compromise. The Pentagon labeled it a "supply chain risk," a designation usually reserved for Chinese or Russian companies and never before applied to an American one. Trump ordered all federal agencies to immediately stop using Anthropic and prohibited military contractors from doing business with it, under a six-month phase-out plan.

OpenAI signed an agreement shortly afterward and followed suit.

Anthropic sued the US government; the case is currently in trial, with over 30 experts from OpenAI and Google DeepMind supporting it. Dario Amodei publicly stated, "We would rather not cooperate than violate the principles." This is not a military blockade like in action movies.

There are no tanks surrounding Anthropic's headquarters or anything like that. This is a contract embargo plus blacklisting: a form of economic isolation meant to force the company to bend on its ethics. Many call this a classic conflict between AI safety and national security.

Anthropic continues to operate normally with civilian businesses, but it has lost major government contracts and has been labeled a national security risk, an unprecedented action against a US company.

In short, Anthropic is maintaining its ethics-first stance, even though it still allows the military to use Claude in many other cases, while the US government wants no limitations at all. The battle is still raging, with Anthropic filing a countersuit and receiving strong support from the AI community.

The truth is far more serious: it shows the US government is willing to use its power to force an AI company to abandon its safeguards. What do you think? Should Anthropic back down or stand firm?