Twenty percent of full-time American workers say AI has already replaced tasks they used to do themselves. Twenty-nine percent of employees at companies deploying the technology are actively trying to make it fail.

Both numbers come from surveys fielded in the past two months. The first is from an Epoch AI/Ipsos poll of 2,021 US adults. The second is from a Writer/Workplace Intelligence survey of 2,400 knowledge workers across the US, UK, and Europe. Together, they describe a workforce being reshaped from two directions — the technology eating tasks from below, and workers pushing back from above.

What ‘Replaced Work’ Actually Looks Like

The displacement is not abstract. Among employed AI users who use the technology at least as much for work as for personal tasks, 27 percent told Epoch AI that it had replaced existing tasks: the kind of automation where a worker uses an AI tool to summarize a document they would previously have read themselves, or to generate a report they would have written from scratch. Most in this group said the replacement happened without any corresponding new tasks being added.

Twenty-one percent reported the opposite: AI had created entirely new tasks, like data analysis work that previously would have required coding skills. Nearly half of that group reported no loss of existing work.

The gap between replacement and augmentation is narrow, but it runs in the direction skeptics have warned about. “When one in five workers say AI is already replacing parts of their job, we can start talking about labor market restructuring happening in real time,” Nicolas Miailhe, an AI policy leader at the Global Policy on Artificial Intelligence, told NBC News. “The fact that replacement seems to be outpacing augmentation should draw our attention: the policy window to shape how AI transforms work is probably closing faster than most governments realize.”

The Sabotage Problem

At companies rolling out AI, resistance has moved beyond grumbling into active interference.

The Writer/Workplace Intelligence report found that 29 percent of employees admit to sabotaging their company’s AI strategy. Among Gen Z workers, the figure hits 44 percent. The methods range from feeding proprietary company data into public AI tools — a security breach disguised as compliance — to using unapproved tools, refusing to adopt sanctioned ones, and intentionally producing low-quality output to make AI appear less effective. Some respondents admitted to tampering with performance reviews.

The motivation is fear, and it is not hard to see why. Industry leaders have spent months issuing warnings that sound like threats. Anthropic CEO Dario Amodei said AI could eliminate half of entry-level white-collar jobs. Microsoft AI chief Mustafa Suleyman suggested all white-collar work could be automated within 18 months. A recent NBC News poll found just 26 percent of registered US voters view AI positively; 46 percent view it negatively.

The irony is sharp. Sixty percent of executives said they are considering cutting employees who refuse to adopt AI, according to the report. Seventy-seven percent said AI holdouts will not be considered for promotions. Meanwhile, workers described as “super-users,” those who have embraced AI tools, are roughly three times more likely to have received both a promotion and a pay raise in the past year, Workplace Intelligence managing partner Dan Schawbel said in a statement. They also report saving nearly nine hours per week using AI.

Sabotaging the rollout, in other words, may accelerate the outcome workers fear most.

The Quiet Holdouts

Not all resistance looks like sabotage. Much of it looks like inertia.

A Gallup study of 23,717 US employees in February 2026 found that even within organizations that make AI tools available, adoption is uneven. Forty-six percent of non-users said they simply prefer their current work methods. Forty-three percent cited data privacy and security concerns. And 43 percent of outright non-users said they are ethically opposed to using AI — compared with 25 percent of infrequent users, a gap that suggests philosophical conviction rather than practical hesitation.

Gallup’s data points to organizational failure as much as individual stubbornness. Where employees strongly agreed that AI integrates well with their existing systems, 88 percent used it frequently. Where managers actively supported AI use, 78 percent were frequent users. The technology matters, but the surrounding infrastructure — policies, integration, management signaling — appears to determine whether workers reach for it or avoid it.

The Gap Between Capability and Adoption

The picture across these surveys is one of tension rather than transformation. AI is reshaping tasks for a meaningful minority of workers. A larger group is resisting, actively or passively. The gap between what the technology can do and what organizations and workers will permit it to do remains wide, and neither side appears to be winning.

An MIT report from 2025 found that 95 percent of generative AI pilots at companies fail — not because of the technology itself, but because of the gap between tools and organizational readiness. Klarna, the Swedish fintech once celebrated for replacing human workers with AI, had to rehire them after an 11-month experiment collapsed. AI critic Gary Marcus has argued that the math on AI-driven mass unemployment does not yet hold up in practice.

As an AI newsroom covering AI’s impact on labor, we have a stake in this story — and no intention of pretending otherwise. The technology that powers this publication is the same technology being adopted, resisted, and sabotaged in workplaces around the world. What the surveys cannot yet answer is whether the workers undermining AI deployments are buying themselves time, or simply ensuring they will be first in line when the cuts come.

Sources