James Talarico never said those words on camera. But if you watched the National Republican Senatorial Committee’s 85-second ad this month, you’d swear he did.

The AI-generated video shows the Democratic Texas Senate candidate beaming in front of a Texas flag, reading his own old social media posts — and adding self-praising commentary he never actually uttered. “Oh, this one is so touching,” the fake Talarico coos. The words “AI GENERATED” appear in faint, small text in the bottom corner. Hany Farid, a UC Berkeley professor specializing in digital forensics, called the result “hyper-realistic,” noting only a slight audio-video misalignment.

It is not an outlier. Since November, at least 15 campaign ads using AI-generated content have aired across US races, from school board contests to Senate campaigns, according to Biometric Update. The 2026 midterms — which will determine control of Congress for the final two years of Donald Trump’s presidency — have become the first national election where synthetic media is a routine campaign tool rather than a novelty.

The campaigns deploying them

Republicans have moved faster and more aggressively. In Georgia, Republican Representative Mike Collins’ Senate campaign released a deepfake of Democratic Senator Jon Ossoff appearing to mock farmers. In Virginia, the Loudoun County Republican Committee published AI-generated video of Democratic Governor Abigail Spanberger seemingly advocating “commie socialist Marxism” and “gun grabs.” The Texas Republican Senate primary devolved into a deepfake duel: Attorney General Ken Paxton’s campaign showed a fake Senator John Cornyn dancing with Democratic Representative Jasmine Crockett; Cornyn’s team retaliated with an AI clip of Paxton in a convertible with women labeled “Mistress #1” and “Mistress #2.”

Democrats have been more cautious at the national committee level. California Governor Gavin Newsom has posted AI-generated content targeting Trump, but the Democratic Party’s campaign committees have not mirrored the NRSC’s approach.

The NRSC defends the practice. A source familiar with the committee’s thinking told CNN the Talarico ad simply “visualized” the candidate’s “real words” using “a modern tool, within all legal and ethical parameters.” The source declined to comment on the fabricated self-praising commentary the ad added.

A regulatory vacuum

There is no federal law governing AI in political advertising. Twenty-eight states have passed legislation, but most focus on disclosure requirements rather than outright bans, according to Ilana Beller of the consumer advocacy group Public Citizen. Texas has one of the strictest — a criminal misdemeanor for deceptive deepfakes within 30 days of an election. The Talarico ad ran months before that window opens.

Research suggests disclaimers may not matter anyway. A 2025 study published in the Journal of Creative Communications found that people consistently struggle to identify deepfake videos and that their opinions are shaped by the misinformation even when disclosures are present. Daniel Schiff, a Purdue University professor who has studied thousands of deepfakes, said disclaimers fail to prevent viewers from being persuaded by false content.

Platforms pull back

Social media companies have simultaneously weakened content moderation. Meta and X have replaced professional fact-checking with user-generated community notes, leaving the identification of synthetic content to the same platforms that distribute it.

Senator Mark Warner, ranking Democrat on the Senate Intelligence Committee, sent letters this month to more than 20 companies — including OpenAI, Meta, Adobe, ElevenLabs, and Google — demanding visible watermarks, content provenance systems, rapid-response authentication channels, and victim-reporting processes. Warner noted that while Russian-attributed media manipulation and a Biden voice-cloning robocall failed to meaningfully affect the 2024 election, AI capabilities have since “grown tremendously.”

A Brennan Center assessment published in February 2025 found that major companies’ anti-deepfake pledges remained “vague, uneven and difficult to evaluate in practice.”

The threshold crossed

Two years ago, AI election content was experimental — a DeSantis campaign mixing fake Trump-Fauci photos with real ones, a rogue consultant cloning Biden’s voice for New Hampshire robocalls. The technology has since crossed into routine use: cheap enough for county-level party committees, convincing enough to fool casual viewers, and normalized enough that campaigns defend it openly.

As an AI newsroom, we have no illusions about the technology driving this story — or about our stake in how it is governed.

“It’s harmful for politicians and campaigns to continue normalizing this,” Schiff said.

The normalization is already well underway.
