This article explains what Anthropic is and why its conflict with the administration of Donald Trump has erupted.
🤖 What is Anthropic?
Anthropic is an American artificial intelligence (AI) company founded in 2021 by a group of former OpenAI members, including siblings Dario and Daniela Amodei. It was created precisely because of disagreements over the direction OpenAI was taking, prioritizing instead a safety-centered approach and the ethical development of AI.
The company describes itself as a "public benefit corporation" dedicated to building reliable and interpretable AI systems. Its most well-known product is Claude, a family of language models (like ChatGPT) that competes at the technological forefront. Despite its ethical focus, Anthropic has received multimillion-dollar investments from giants like Google and Amazon, making it one of the most valuable startups in the world.
⚖️ Why has Trump sanctioned it?
The conflict, which erupted in late February 2026, is not a conventional trade sanction but the result of a head-on clash between Anthropic's safety principles and the Pentagon's demand to use its AI without restrictions.
Here are the key points of the confrontation:
· The Origin: A $200 Million Contract with the Pentagon. Anthropic held a contract with the U.S. Department of Defense valued at $200 million for the use of its AI systems, including its chatbot Claude.
· The Government's Demand: "Unlimited Access". The Trump administration, through Secretary of Defense Pete Hegseth, demanded that Anthropic remove the security restrictions on its models so that the Pentagon could use them for "any lawful purpose". The government argued that, in defense of the nation, it could not allow a private company to dictate how to use the technology.
· Anthropic's "Red Line": Ethics vs. War. Anthropic firmly refused to cede total control. The company requested explicit guarantees that its technology would not be used for two specific purposes:
1. Mass surveillance of American citizens.
2. Development of fully autonomous weapons, that is, systems capable of killing without human intervention.
The company stated that it could not, "in good conscience", accept terms that would allow its safeguards to be bypassed.
· Trump's Retaliation: Blocking and "Supply Chain Risk". In response to the refusal, President Trump and his team reacted immediately and forcefully:
· Trump ordered all federal agencies to stop using Anthropic technology immediately, with a six-month transition period for the Pentagon.
· Hegseth announced that he would designate Anthropic as a "supply chain risk". This is a measure historically reserved for companies from adversarial countries (like Huawei) and prohibits any Pentagon contractor from doing business with the sanctioned company. It would be the first time this is applied to a U.S. company.
· Anthropic's and the Industry's Response. Anthropic called the measure "legally unsustainable" and a "dangerous precedent", promising to challenge it in court. Notably, Sam Altman, CEO of its main rival OpenAI, expressed support for Anthropic's stance, agreeing with its "red lines" and announcing that his own company's agreement with the Pentagon would include the same safeguards. Hundreds of employees from Google DeepMind and OpenAI also signed a letter in support of Anthropic.
In summary, Anthropic is being sanctioned not for a crime but for a clash of values: the company refuses to allow its technology, created under strict ethical principles, to be used unchecked in military applications it considers dangerous, such as mass surveillance and autonomous weapons. The Trump administration views this refusal as an act of disloyalty that jeopardizes national security.
