How U.S. States Are (and Aren’t) Tackling AI‑Generated Election Deepfakes
Artificial‑intelligence deepfakes have sprinted from novelty to democratic threat in just a few election cycles. In 2024–25 alone, U.S. states filed 134 bills aimed at curbing AI‑generated deception, but what do these laws do, and where are the gaps? My recent research for Texas State University dug into 30 enacted statutes, and the results are surprising, encouraging, and frustrating all at once. Here’s the story.
The Big Picture: Lots of Labeling, Little Public Voice
Most lawmakers reached for the same tool: mandatory disclosure labels.
83% of state laws now require AI‑generated political ads to carry a “manipulated” or similar tag.
47% criminalize malicious deepfake distribution, often elevating penalties near Election Day.
Yet only 3% of statutes embed any crowdsourcing, user‑reporting, or media‑literacy provisions, leaving voters largely on the sidelines.
Why the Label‑First Approach Falls Short
Labels are a good start—they slow a share‑and‑forget culture—but they’re not a silver bullet. Deepfake campaigns move faster than regulators, and platform algorithms still amplify eye‑catching fakes. Worse, nearly every law places the compliance burden on creators while giving platforms a pass on proactive detection or removal.
What an Effective Hybrid Strategy Looks Like
Based on the data and comparative policy analysis, a smarter framework blends three levers:
Lever #1: Public Participation
Why It Matters: Human eyes still spot nuance that algorithms miss.
Example Actions: Instant in‑app “report deepfake” buttons, state‑funded digital‑literacy blitzes.
Lever #2: Technical Guardrails
Why It Matters: Labels alone ≠ deterrence.
Example Actions: Machine‑readable watermarks; 24‑hour takedown clock for flagged election content.
Lever #3: Legal Accountability
Why It Matters: Bad actors follow incentives.
Example Actions: “Foreseeable harm” felony standard + fines for platforms that ignore verified alerts.
California is edging toward this mix. AB 730 prescribes strict label formatting; AB 2655 mandates a user‑reporting workflow and 72‑hour removals. Other states can borrow and refine.
Key Takeaways for Policymakers, Platforms, and Voters
Speed beats nuance in the misinformation game. Laws must shorten enforcement timelines or pair with real‑time platform action.
Empowered citizens are a force multiplier. Media‑literacy programs and crowdsourced flagging cost far less than endless legal battles.
Transparency builds trust. Public dashboards showing how many deepfakes were removed—and how fast—create accountability.
One state can lead, but federal harmonization will matter by 2026. Without it, manipulators exploit jurisdictional gaps.
Closing Thoughts
Deepfakes will only get cheaper and more convincing: the $11, eight‑minute fake of 2023 is tomorrow’s 30‑second phone app. If we want the 2026 and 2028 elections to hinge on real debate rather than synthetic scandals, the policy conversation must shift from labels and laws to labels, laws, literacy, and live platform cooperation.
Some states have taken the first steps. Now the U.S. federal government needs to sprint.
Stats sourced from my qualitative review of 30 state laws enacted 2024–25.