U.S. Department of Defense Considers Ending Partnership With AI Firm Anthropic Over Usage Restrictions

The Pentagon is reviewing its relationship with the artificial intelligence company Anthropic and may sever ties after months of persistent disagreements over how the military can use the firm’s AI technology. The dispute has centered on the terms under which the U.S. armed forces can deploy Anthropic’s AI models, particularly the Claude family of systems.

Officials within the U.S. Department of Defense, led by Secretary of Defense Pete Hegseth, have grown frustrated with Anthropic’s insistence on usage limits that would bar its AI from being employed for certain military applications. Anthropic’s policies would prohibit the use of Claude for mass domestic surveillance or fully autonomous weapons systems — restrictions it says are designed to prevent misuse and protect civil liberties.

Pentagon negotiators have pressed not only Anthropic but also other AI labs — including OpenAI, Google, and xAI — to agree that their tools can be used for “all lawful purposes,” encompassing sensitive areas such as weapons development, battlefield operations, and intelligence missions without contractual constraints. While some companies have shown greater flexibility in talks, Anthropic’s stance has repeatedly been identified as a sticking point.

According to officials familiar with the matter, the Pentagon is considering designating Anthropic a “supply chain risk,” a status typically reserved for adversaries or foreign entities deemed a national security threat. If implemented, this designation could require any contractor seeking Pentagon business to certify that it does not use Anthropic’s technology, effectively forcing partners to sever ties.

Anthropic’s AI model Claude is currently integrated into the Pentagon’s classified computing networks — reportedly used in recent operations — and is highly regarded for its analytical capabilities. That entrenched role complicates efforts to transition to alternative systems in the event of a rupture.

The Pentagon’s chief spokesperson said the department’s relationship with Anthropic is under review as part of broader efforts to ensure that partners are prepared to support U.S. defense objectives. Meanwhile, an Anthropic representative said the company remains committed to productive discussions with the Defense Department and to supporting national security objectives within its ethical guidelines.

This conflict illustrates growing tensions between national security priorities and corporate AI governance, as powerful AI tools become integrated into defense operations and debates continue over where ethical limits should be drawn.
