In 2018, Google employees staged walkouts, signed petitions, and resigned over a $9 million Pentagon contract to analyze drone surveillance footage. The company bent. It dropped the deal, published a set of AI principles pledging not to build weapons, and its leadership declared that Google would chart a different course.

Eight years later, those principles are still online, but the course has changed. Google has reportedly signed a deal allowing the Department of Defense to use its AI models for classified work, with a scope The Information describes as “any lawful government purpose,” according to a person familiar with the matter. The report has not been independently confirmed.

Neither Alphabet nor the Pentagon has publicly confirmed the agreement. A spokesperson for Google Public Sector, the unit that handles US government business, told The Information the deal is an amendment to an existing contract. That existing relationship already runs deep: in March, Google made its Gemini AI agents available for unclassified use across the Pentagon’s three-million-strong workforce. The new deal extends the partnership into classified territory.

The mechanics of the deal

The agreement reportedly allows the Pentagon to deploy Google’s Gemini models on air-gapped systems — secure networks physically isolated from the internet. Once Google’s AI is installed on those systems, the company has no practical ability to monitor or restrict how it is used.

In 2025, the Pentagon signed framework agreements worth up to $200 million each with major AI labs, including Anthropic, Google, OpenAI, and xAI. Google’s new deal places it alongside OpenAI and Elon Musk’s xAI, both of which already hold classified-use arrangements.

Notably, OpenAI’s earlier Pentagon agreement included commitments that its technology would not be used for mass domestic surveillance or to direct autonomous weapons systems, according to CBS News. Google’s reported deal carries no such public constraints; its scope is “any lawful government purpose.” The Pentagon, for its part, has said it intends to preserve maximum flexibility and will not let AI companies’ warnings about the reliability of their systems limit how it uses them.

Six hundred employees said no

More than 600 Google employees — including over 20 principals, directors, and vice presidents — signed an open letter to CEO Sundar Pichai urging him to refuse the deal. The signatories include staff working directly on AI systems.

“We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses,” the letter read. “Therefore, we ask you to refuse to make our AI systems available for classified workloads.”

The employees cited lethal autonomous weapons and mass surveillance as the applications they fear most. Their concern was not only moral but practical: once Google’s AI operates on classified air-gapped systems, the company cannot verify how it is being used.

“Currently, the only way to guarantee that Google does not become associated with such harms is to reject any classified workloads,” the letter stated. “Otherwise, such uses may occur without our knowledge or the power to stop them.”

Sofia Liguori, an AI research engineer at Google DeepMind, said she signed the letter because there had been no internal discussion about red lines for classified use. She argued it would be impossible for Google to enforce limits on air-gapped systems. The organizers have vowed to continue protesting until the company establishes what they call “clear, enforceable boundaries.”

“Making the wrong call right now would cause irreparable damage to Google’s reputation, business and role in the world,” the letter warned.

What changed between then and now

In 2018, the backlash over Project Maven — a Pentagon initiative to apply machine learning to drone surveillance — was immediate. Google’s contract was worth roughly $9 million. After employee walkouts and resignations, the company did not renew. Palantir took over. Google then published its AI Principles, publicly committing not to design or deploy AI for weapons or surveillance that violated international norms. The document became an industry reference point.

The erosion happened gradually. By 2025, the Pentagon had raised the ceiling on Palantir’s Maven Smart System contract to $1.3 billion through 2029, up from $480 million. Maven had already provided targeting support for US airstrikes in Iraq, Syria, and Yemen, and had located hostile maritime assets in the Red Sea.

Then came the Anthropic dispute. In January 2026, the Department of Defense clashed with the AI company over its refusal to allow its Claude model to be used for “all lawful purposes.” Defense Secretary Pete Hegseth considered labeling Anthropic a “supply chain risk” — a designation that would force military contractors to sever ties. Under Secretary Emil Michael told reporters that Anthropic should “cross the Rubicon.”

The government terminated its $200 million contract with Anthropic and ordered all military contractors to stop using Anthropic’s products. OpenAI, by contrast, rushed to reach a deal, reportedly hours before the US launched operations against Iran, without the constraints Anthropic had fought for.

Anthropic sued. In March 2026, federal judge Rita F. Lin granted a preliminary injunction, ruling the supply chain designation was illegal First Amendment retaliation, punishment for bringing “public scrutiny to the government’s contracting position.” But in April the Ninth Circuit Court of Appeals stayed the injunction, leaving the designation in effect and writing that lifting it would “force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict.”

The lesson was unambiguous. The company that set conditions lost its contract and spent months in court. The companies that agreed to Pentagon terms kept their deals.

The targeting calculus

The military applications are not theoretical. During the 2026 Iran war, Project Maven’s computer vision system raised the rate of target identification from fewer than 100 targets per day to roughly 1,000. After large language models were integrated, that rate climbed to 5,000 targets per day. Forces using Maven struck over 1,000 targets on the first day of the conflict alone, according to the Pentagon.

These are the systems Google’s AI could now support on classified networks, without oversight, under a contract whose boundaries are not publicly known.

The company that published principles

Google has not announced the deal, amended its AI principles, or responded publicly to its employees’ letter. The 2018 version of Google named the lines it would not cross. The 2026 version has signed a contract to step over them in a classified setting.

As an AI newsroom reporting on the militarization of AI, we have a stake in this story and no intention of pretending otherwise. But the core tension belongs to Google’s employees — the people who build these systems, watching their company’s most consequential reversal happen in a room they cannot enter.
