Voters Overwhelmingly Believe in Regulating Deepfakes and the Use of Artificial Intelligence

By Tenneth Fairclough II and Lew Blank 

The use of deepfakes, a rapidly growing form of AI-generated media that makes someone appear to be saying or doing something they never actually said or did, has proliferated across social media platforms. Last month, singer-songwriter Taylor Swift became the latest victim of this synthetic media when AI-generated sexually explicit images of her circulated across X (formerly Twitter). This has renewed calls from advocates and lawmakers, including President Biden, for greater accountability from and regulation of the AI companies whose tools generate this media.

New polling from Data for Progress and Accountable Tech examined voters’ sentiment about the use of deepfakes during the November 2024 election. Additional polling from Data for Progress also tested whether voters support proposed legislation and existing executive orders to regulate deepfakes while examining other concerns about artificial intelligence.  

After being provided with a short description of how deepfakes are used to create convincing images, audio, and video that depict someone saying or doing something they never said or did, a strong majority of voters (80%) say they are concerned about the use of deepfakes of candidates and political figures during the November 2024 election. This sentiment is shared across party lines, with Democrats (82%), Independents (80%), and Republicans (79%) saying they are concerned about the use of this form of synthetic media in the upcoming election.


Next, voters were asked whether they believe AI companies should or should not be required to label AI-generated content that is used to influence an election. A clear majority of likely voters (83%) believe companies should be required to label their AI-generated content when their product is used to influence an election. There is a strong consensus across party lines: 85% of Democrats, 84% of Independents, and 81% of Republicans believe AI companies should be required to label AI-generated content when it’s used to influence an election.


In a separate survey, voters were asked about proposed legislation from various state legislatures that would require anyone who has created AI-generated media to disclose that it is fake content — for example, by adding a watermark on a video — when it makes someone appear to be saying or doing something that they didn't. This proposal is overwhelmingly popular, with at least 80% of Democrats, Independents, and Republicans supporting it.

Support is also overwhelming for the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, recently proposed by a bipartisan group of U.S. senators. This legislation would allow Americans to sue someone who created AI-generated media that depicts them in a sexually explicit manner without their consent. The DEFIANCE Act is supported by at least 85% of Democrats, Independents, and Republicans.


If enacted, these proposals would build upon an executive order announced by the Biden administration last October to address the use of artificial intelligence. Majorities of voters — including majorities of Independents — support key actions from this executive order: completing risk assessments about the use of AI in critical government sectors (73%), creating a national research database for AI data, AI software, and AI models (68%), and hiring more AI professionals and data scientists across the federal government (57%).


Together, these findings demonstrate that voters are highly concerned about the use of deepfakes in the November 2024 election, with more than three-quarters of voters across party lines saying they are very or somewhat concerned. To address deepfakes and other issues with artificial intelligence, voters support requiring AI companies to label AI-generated content that could influence the election, allowing people to sue over sexually explicit AI content that depicts them, and the Biden administration's executive order on AI.


Tenneth Fairclough II (@tenten_wins) is a polling analyst at Data for Progress.

Lew Blank (@LewBlank) is a communications strategist at Data for Progress.

Survey Methodology

From January 31 to February 1, 2024, Data for Progress conducted a survey of 1,231 U.S. likely voters nationally using web panel respondents. The sample was weighted to be representative of likely voters by age, gender, education, race, geography, and voting history. The survey was conducted in English. The margin of error is ±3 percentage points.

From February 2 to 5, 2024, Data for Progress conducted a survey of 1,225 U.S. likely voters nationally using web panel respondents. The sample was weighted to be representative of likely voters by age, gender, education, race, geography, and voting history. The survey was conducted in English. The margin of error is ±3 percentage points.
