The conflict between Anthropic and the US government over military use of the company's AI technology has escalated in recent days. President Donald Trump banned all federal agencies from using Anthropic's technologies and services. On his platform Truth Social, Trump stated the government's position bluntly: they do not need the technology, they do not want it, and they will no longer do any business with the company.
The move was flanked by Secretary of Defense Pete Hegseth, who announced in a post on X that he had instructed the Department of Defense to officially designate Anthropic a supply-chain risk to national security:
"[…] I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security.[…]"
This drastic step, normally reserved for foreign adversaries, prohibits all military contractors and suppliers from working with Anthropic, effective immediately. Agencies and companies have been given six months to let existing contracts expire.
The Incompatible Positions of Anthropic and the Pentagon
At the center of the dispute are fundamentally different views on the ethical guardrails for the military use of Artificial Intelligence. In contract negotiations, the Pentagon insists unyieldingly on the position that it may use the acquired AI systems for absolutely "any lawful purpose."
However, this demand crosses Anthropic's red lines. Anthropic CEO Dario Amodei emphasized in a blog post his conviction that AI is of existential importance for the defense of the United States and other democracies. However, he refused to make the company's systems available for domestic surveillance or for direct use in lethal, autonomous weapons. Amodei argued that mass domestic surveillance is incompatible with democratic values. Furthermore, he pointed out that today's state-of-the-art AI systems are simply not yet reliable enough to control fully autonomous weapon systems.
Anthropic responded immediately to the government's far-reaching sanctions by threatening legal action. The company stated that it considers the classification as a security risk legally questionable and sees it as a dangerous precedent for any American company negotiating with the government.
The Agreement with OpenAI
Just hours after the Trump administration's public statements, OpenAI CEO Sam Altman announced that his company had reached an agreement with the Department of Defense under which OpenAI's models are to be deployed in the department's classified networks. Altman emphasized in a post on X that the Pentagon had shown openness to security concerns during the talks.
According to Altman, the department agreed to OpenAI's two most important security principles: a ban on domestic mass surveillance and a requirement of human responsibility in the use of force, including autonomous weapon systems. OpenAI announced that it would develop appropriate safeguards to ensure the models behave correctly.
But does this mean OpenAI is not going beyond what Anthropic was willing to provide? The US Department of War sees things differently, at least in part. Under Secretary of Defense Jeremy Lewin wrote on X that the wording "all lawful use" was central and that the department and OpenAI had agreed on it. The arrangement involves various legal authorities and security mechanisms – terms that had also been offered to Anthropic, but which the company rejected.
While OpenAI was willing to defer to the assessment of government authorities on such questions, Anthropic was not and reserved that judgment for its CEO.
It strongly appears that OpenAI is placing the evaluation of how its technology is used in the hands of the US government so that it can later point to that review. It is doubtful, at the very least, that this examination will be particularly critical.
Conclusion and Assessment
With its actions, the US government underscores that it is ready to take drastic measures against domestic companies if they oppose the far-reaching demands of the military. The classification of Anthropic as a "security risk to the supply chain" shows an uncompromising stance by the Pentagon. As Anthropic aptly warns, this punishment sets a dangerous precedent for all American companies that want to negotiate contracts with the government in the future while drawing their own ethical boundaries.
The conflict is not limited to the companies involved but is provoking resistance across the technology sector. Notably, the government's hard line is producing cross-company solidarity: an open protest letter to the Pentagon and Congress was signed by executives from across the tech industry as well as eleven OpenAI employees.