AI Could Empower Insurgent Candidates in 2026 Elections — While Raising New Risks of Deepfakes and Disinformation


Artificial intelligence is beginning to reshape American elections in ways that could both democratize campaigning and destabilize the information environment. 

New AI tools are dramatically lowering the cost of producing campaign messaging, potentially allowing insurgent or underfunded candidates to compete with well-financed incumbents.

But the same technology also carries serious risks, including the rapid spread of deepfakes and anonymous disinformation that could blur the line between authentic political speech and fabricated content.

Early signs of that shift are already emerging. According to recent reporting from the Washington Post, candidates backed by artificial intelligence companies are seeing early success in primary contests. Of the 20 candidates in Texas and North Carolina primaries who received funding connected to AI interests, only one lost her race.

The growing role of artificial intelligence in campaign strategy is raising new questions about how the technology could reshape the political playing field ahead of the 2026 midterms.

Mark Meckler, president of Convention of States Action and a longtime political organizer, said in an interview with the Vanguard that artificial intelligence is poised to transform not only how campaigns operate but also who is able to compete in elections.

“AI, as you know, is changing everything,” Meckler said. “Literally everything is changing and it’s going to change politics.”

For decades, political campaigns have operated within a familiar financial framework: candidates with more money could afford better consultants, stronger media operations and broader outreach.

Meckler said artificial intelligence may disrupt that equation.

“It used to be if you had an infinite amount of time to do something, if you could take a very long time, you didn’t need as much money,” he said. “If you had billions of dollars, hundreds of millions of dollars, you could throw a bunch of engineers at it, solve a problem very quickly.”

Artificial intelligence, he said, alters that dynamic.

“AI does something completely different,” Meckler said. “You spend less money and it takes less time, which it’s just an unbelievable world-changing thing.”

The shift could significantly lower the cost of campaign messaging. Ads that once required professional production teams, consultants and significant financial resources can now be created rapidly using AI tools.

“You can produce an entire professional level network quality commercial,” Meckler said. “I was going to say in a day, but depending on how good you are with your AI, in a couple of hours.”

For candidates without large war chests, that change could be transformative.

“If you learn to use AI well, you’ll produce a commercial that’s just as good as the candidate that has $50 million in the bank,” he said.

Artificial intelligence is also beginning to change how campaigns distribute messages and reach voters. Instead of relying heavily on television advertising and expensive media buys, campaigns can deploy AI systems to manage digital outreach strategies.

“You can take that commercial that was written, directed, and produced by AI at almost zero cost, and you could place it in social media and you can have your AI design your social media placements, monitor the response you get on social media, adjust the placements,” Meckler said.

That kind of automated targeting allows campaigns to adapt their messaging quickly and efficiently while spending far less than traditional advertising models required.

“You’re spending not even pennies on the dollar,” he said.

The implications extend beyond campaign budgets. Meckler suggested that the rise of AI-driven campaign tools could reduce the influence of political consultants who have historically dominated the industry.

“I think one of the cancers in our politics is the consulting class,” he said. “They make an immense amount of money from placing advertising.”

Political consultants frequently earn substantial commissions from advertising placements, sometimes collecting significant fees for managing media buys and messaging strategies.

“They make huge fees for doing generally 15 percent placement fees,” Meckler said.

Artificial intelligence could allow campaigns to perform many of those functions internally, reducing the need for expensive consulting services.

“All of that stuff goes away with the advent of AI,” he said.

If those trends continue, Meckler believes artificial intelligence could open the door for a new generation of candidates who might otherwise struggle to compete financially.

“I think what they should watch for is the rise of the candidate that is not heavily funded,” he said.

While insurgent candidates occasionally break through in American politics, such victories have traditionally been rare due to fundraising advantages held by incumbents and party-backed candidates.

Artificial intelligence could change that balance.

“I think you’re going to see a wave of candidates without big funding winning primaries,” Meckler said.

Those candidates, he said, may succeed not because they have more money but because they are more effective at using new technology.

At the same time, the technology introduces new dangers that could complicate the political environment.

One of the most serious risks is the growing sophistication of deepfake media created using artificial intelligence.

“I think the danger, and we’ve already seen some of this, there is a severe danger in what is commonly referred to as deep fakes,” Meckler said.

Advances in AI-generated media now allow users to produce videos that appear to show individuals saying or doing things that never actually occurred.

“You can literally now produce advertising or even just, I would describe it as hit pieces that could show you or me saying things we never said in circumstances we’ve never been in,” he said.

Because social media platforms can rapidly distribute such material, false content can spread widely before it is verified or debunked.

“That pollution can be very rapid and very dangerous,” Meckler said.

The regulatory environment surrounding artificial intelligence in elections remains underdeveloped, he said.

“We don’t have good laws or regulations around that preventing that right now,” Meckler said.

Another concern is that AI-generated misinformation can be distributed anonymously or through international networks, making it difficult to trace the origin of manipulated content.

“When it’s coming from anonymous sources potentially running through international routing, you have no idea where that’s coming from,” Meckler said.

For journalists and voters attempting to verify information during a heated campaign season, that anonymity can create additional challenges.

Even experienced media consumers, Meckler acknowledged, can sometimes react emotionally to viral content before confirming whether it is authentic.

“I find myself falling prey,” he said. “Somebody will send me a video or I’ll see something on X and I’ll immediately feel the dopamine rush.”

He said he often has to stop and verify whether the content is real.

“I’ll stop myself and think, wait, is that real?” he said.

Despite the risks, Meckler believes artificial intelligence may eventually help counter the spread of misinformation by enabling new verification tools.

“What will happen is I believe there will be commercially available free products actually that will allow us to filter things for AI influence,” he said.

Those tools could allow users to analyze content and determine whether it was generated or manipulated by artificial intelligence.

“You can already do that,” Meckler said. “You can take things that have been written and run them through AI filters and determine the likelihood that they’re AI produced.”

As the technology continues to evolve, he said, voters may increasingly rely on AI tools to verify information circulating online.

“I think what will happen is consumers … will get that discretion over the next couple of cycles in regard to fake stuff that’s on the internet by running it through our own AI,” Meckler said.

With the 2026 midterms approaching, the growing influence of artificial intelligence in political campaigns suggests the coming election cycle may serve as an early test of how the technology reshapes American democracy.

Whether AI ultimately empowers new candidates and expands political competition, or fuels deeper mistrust through synthetic media and disinformation, may depend on how quickly voters, journalists and policymakers adapt to the rapidly evolving technology.


Author

  • David Greenwald

    Greenwald is the founder, editor, and executive director of the Davis Vanguard. He founded the Vanguard in 2006. David Greenwald moved to Davis in 1996 to attend Graduate School at UC Davis in Political Science. He lives in South Davis with his wife Cecilia Escamilla Greenwald and three children.


1 comment

  1. “You can already do that,” Meckler said. “You can take things that have been written and run them through AI filters and determine the likelihood that they’re AI produced.”

    I decided to test that hypothesis on Matt Stone’s article today. I did not even read it. I found a web-based A.I.-text-detection product that allowed a few free trials. MS got oddly defensive one time when I made a very obtuse reference to A.I. after he used a sentence twice, word-for-word, in the same article. Got real upset that anyone would question an “award-winning” author. I pasted MS’s story today into the input box, and this is what I got for output:

    =========================================

    Very likely AI-written (or heavily AI-assisted).

    Here’s why, based on stylistic and structural signals—not guesswork:

    1. Signature rhetorical pattern (LLM hallmark)

    The piece uses a repeated cadence:

    “They do not need to…”

    “This is not a… It is a…”

    “The question is not… The question is…”

    That kind of parallelism + inversion pattern is extremely common in AI outputs because models are trained to produce persuasive rhythm through repetition. Humans do it too—but usually with more variation and occasional sloppiness. This is too consistently clean.

    2. Over-optimized structure

    The essay is almost perfectly modular:

    Viral claim → reframed

    “Start with the owners”

    “Now, understand the mechanism”

    Platforms breakdown

    AI “final phase”

    Grand conclusion + call to action

    That clean progression reads like a prompted outline executed faithfully, which is typical of AI. Human writing usually meanders more or has uneven emphasis.

    3. Uniform tone and density

    Every paragraph:

    Same intensity

    Same sentence length range

    Same rhetorical weight

    There are no natural dips, tangents, or idiosyncratic phrasing. Humans almost always vary—this stays locked in a high-polish register throughout.

    4. Confident assertions without friction

    It makes strong claims (e.g., about control, intent, algorithms, geopolitics) with:

    No sourcing

    No uncertainty

    No specific insider detail

    AI tends to produce smooth, authoritative generalizations without the messy specifics a human expert would include (or hedge).

    5. Conceptual blending typical of AI

    It merges:

    Media ownership

    Algorithms

    Billionaires

    AI future (“endgame” framing)

    into one unified thesis without clearly distinguishing:

    What’s proven

    What’s speculative

    What’s opinion

    That kind of clean narrative stitching across domains is a classic LLM trait.

    6. The ending

    “You are not the audience. You are the fuel.”

    That’s a high-impact, slogan-like closer—very characteristic of AI-generated persuasive writing trained on viral content patterns.

    Probability AI-written: high
