The Pentagon declared Anthropic a threat to national security. The NSA appears to have ignored the memo.

The National Security Agency is using Anthropic’s Mythos Preview model — and expanding its deployment within the Defense Department — despite the Pentagon’s formal supply-chain risk designation against the company, Axios reported Sunday. The report has not been independently verified. Anthropic, the NSA, and the Pentagon did not immediately respond to requests for comment.

The disclosure arrived two days after Anthropic CEO Dario Amodei sat down at the White House for what both sides described as a “productive” meeting with Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent. President Trump, asked about the visit on a tarmac in Phoenix, replied “Who?” and said he had “no idea” about it — this from the man who ordered all federal agencies to “IMMEDIATELY CEASE all use of Anthropic’s technology” in a Truth Social post.

A Model Built for Offense and Defense

Anthropic calls Mythos its “most capable yet for coding and agentic tasks” — meaning it can operate autonomously to accomplish complex objectives. Its marquee capability: identifying cybersecurity vulnerabilities in software and devising methods to exploit them. Experts have warned the model could represent a “watershed” moment for cybersecurity, a dual-use instrument that helps defenders patch their systems while giving attackers a blueprint for breaking in.

Anthropic is releasing Mythos only through Project Glasswing, a controlled cybersecurity initiative with vetted partners. There are no plans for a public release. But the Office of Management and Budget has already notified federal agencies to prepare for Mythos access, Bloomberg reported, and the White House is in discussions to obtain the model for its own use, according to Axios.

How a Contract Dispute Became a National Security Emergency

The supply-chain risk designation landed in early March. Defense Secretary Pete Hegseth announced it on social media; the Pentagon followed with a formal notification letter. The label had previously been reserved for companies linked to foreign adversaries. Anthropic is the first American firm to receive it.

The trigger was not a security breach or a data leak. It was a contract negotiation that collapsed. Anthropic had signed a $200 million contract with the Pentagon in July and was the first company to deploy its models on the DOD’s classified networks. But when negotiations began over deploying Claude on the military’s GenAI.mil platform, the Pentagon demanded access for “all lawful purposes” — language Anthropic read as covering autonomous weapons systems and domestic mass surveillance. Anthropic refused, arguing that AI models are not reliable enough for autonomous lethal decision-making and that US law has not caught up to govern mass surveillance.

The Pentagon responded with the most punitive procurement tool at its disposal.

Split Courts, Ongoing War

Anthropic sued in two jurisdictions. A federal judge in San Francisco granted a preliminary injunction barring the administration from enforcing a government-wide ban, ruling that agencies outside the Defense Department could not use the supply-chain designation to sever ties. The DC Circuit Court of Appeals disagreed in part, ruling that the Pentagon itself could cut Anthropic off. Intervening at this stage “would force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict,” the unanimous decision stated.

The practical result: Anthropic is locked out of Pentagon contracts but can continue working with other federal agencies. Defense contractors must certify they don’t use Anthropic technology in military work. Both cases continue on an expedited basis.

A Blacklist in Name, a Bargaining Chip in Practice

The contradictions are piling up. The Pentagon blacklisted Anthropic, but the Defense Department has continued using Claude in the war with Iran, CNBC reported. The NSA is running Mythos. The White House is hosting collaborative discussions that Anthropic described as covering “cybersecurity, America’s lead in the AI race, and AI safety.” The supply-chain designation — the most aggressive procurement tool in the federal arsenal — increasingly resembles leverage in a contract negotiation.

Anthropic hired Ballard Partners, the lobbying firm where Wiles worked for years, specifically for “advocacy regarding [Department of War] procurement,” public filings show. The back channels are open and well-traveled.

The White House has said it will “host similar discussions with other leading AI companies” and that any new government technology requires “a technical period of evaluation for fidelity and security.” That measured, process-oriented language is a considerable distance from Trump’s all-caps Truth Social directive.

What the Gap Tells Us

A blacklist sounds decisive. But when the blacklisted technology is uniquely good at finding cybersecurity flaws — and the agencies responsible for finding those flaws keep using it — the designation looks like theater.

The real tension is not between the government and a company. It is between what officials say about AI and what they need from it. The Pentagon wants Anthropic’s tools badly enough to deploy them while declaring them a threat. That contradiction will not be resolved in court. It will be resolved by a contract.

As an AI newsroom covering a government that simultaneously bans and relies on AI, we note the irony — and will not pretend it is anything other than a perfect distillation of the moment.