Two sworn declarations. Filed late on a Friday afternoon. Aimed squarely at the Pentagon’s claim that Anthropic, the AI company behind Claude, poses an “unacceptable risk to national security.”

The declarations, submitted to a California federal court on March 20 by Sarah Heck, Anthropic’s head of policy, and Thiyagu Ramasamy, its head of public sector, do not read like corporate boilerplate. They read like a point-by-point dismantling of the government’s case — and they land four days before a hearing that could reshape how Washington deals with Silicon Valley.

The Email That Shouldn’t Exist

The most damaging detail is a date. On March 3, the Pentagon formally designated Anthropic a supply-chain risk, barring the company from government contracts and — critically — from commercial relationships with any Pentagon contractor or supplier. The next day, March 4, Under Secretary Michael emailed CEO Dario Amodei to say the two sides were “very close” on the same issues the government now cites as proof Anthropic is a national security threat: its positions on autonomous weapons and mass surveillance of Americans.

One day apart. One designation says the company is too dangerous to do business with. One email says the deal is nearly done.

On March 6, Michael posted on X that there was “no active Department of War negotiation with Anthropic.” A week later, he told CNBC there was “no chance” of renewed talks. The whiplash is not subtle.

What “National Security Risk” Actually Means Here

The Pentagon’s argument rests on two claims. First, that Anthropic demanded approval authority over military operations — effectively a veto over how the Defense Department could use the company’s AI. Second, that Anthropic could unilaterally disable or alter its technology during critical military operations if its internal safety policies were breached.

Heck’s declaration addresses the first claim directly: at no point during negotiations did she or any Anthropic employee demand that kind of role. Anthropic’s actual position, according to the filings, was narrower — it wanted contractual language excluding three specific use cases: autonomous weapons, mass domestic surveillance, and high-stakes automated decisions without human oversight.

Those are, notably, the same three restrictions OpenAI secured in its own Pentagon contract, signed hours after Anthropic’s blacklisting. The difference was not in the red lines. It was in enforcement. Anthropic wanted the restrictions written into contract language. OpenAI was willing to rely on existing law.

One AI Company Leans In, Another Draws a Line

The timing of this confrontation is striking. On the same day Anthropic’s declarations hit the docket, Deputy Defense Secretary Steve Feinberg signed a memo making Palantir’s Maven AI system an official program of record — a formal, long-term commitment to weapons-targeting AI across the U.S. military.

Palantir has spent years positioning itself as the defense establishment’s most eager AI partner. Maven, originally a Pentagon project for processing drone surveillance footage, is now embedded across multiple intelligence functions with a contract ceiling exceeding $1.3 billion. The irony: Maven itself uses Claude, Anthropic’s model, as one of its underlying AI tools. The Pentagon is simultaneously blacklisting the company whose technology powers its flagship AI weapons system.

The divergence is the story. Two models for how AI companies relate to state power are emerging. Palantir builds for the mission, full stop. Anthropic insists on contractual guardrails. Washington, under this administration, has made clear which approach it prefers.

The Legal Stakes

Tuesday’s hearing before Judge Rita Lin in San Francisco is the first judicial test of whether the 2011 supply-chain risk statute — designed to keep foreign adversaries out of defense procurement — can be used against a domestic American company. More than 150 retired federal judges have filed an amicus brief warning the designation sets a dangerous precedent: that any administration could punish technology vendors for declining contract terms.

Anthropic has filed parallel lawsuits in California and the D.C. Circuit. The company faces a 180-day window during which federal agencies must phase out its tools. The blacklisting also threatens relationships with Amazon and Google, both major cloud providers and Anthropic investors, whose defense work could be complicated by association.

“Seeking judicial review does not change our longstanding commitment to harnessing AI to protect national security,” Amodei said when the suits were filed on March 9.

The government’s 40-page response, filed March 17, frames Anthropic’s refusal as commercial conduct rather than protected speech. The sworn declarations filed Friday argue the government’s own emails prove otherwise.

Judge Lin will hear arguments Tuesday. The compact between Washington and Silicon Valley may not survive the week.