Poll: 3 in 4 fear artificial intelligence abuse in presidential election


More than 3 in 4 Americans fear abuses of artificial intelligence will affect the 2024 presidential election, and many are not confident they can detect faked photos, videos or audio.

AI & Politics ’24, a survey led by Lee Rainie and Jason Husser at Elon University, found that 78% believe it is likely artificial intelligence will be abused to affect the outcome of the contest between President Joe Biden and former President Donald Trump. Another 39% believe artificial intelligence will hurt the election process, while just 5% believe it will help.

“Voters think this election will unfold in an extraordinarily challenging news and information environment,” said Rainie, director of Elon’s Imagining the Digital Future Center. “They anticipate that new kinds of misinformation, faked material and voter-manipulation tactics are being enabled by AI. What’s worse, many aren’t sure they can sort through the garbage they know will be polluting campaign-related content.”

The poll of 1,020 U.S. adults age 18 or older was conducted by the Imagining the Digital Future Center and the Elon University Poll. Sampling took place April 19-21; the results were released Wednesday morning and carry a margin of error of +/-3.2 percentage points at a 95% confidence level.
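For readers curious how that margin of error relates to the sample size, the minimal sketch below applies the standard worst-case formula for a simple random sample, z * sqrt(p(1-p)/n), with p = 0.5 and z = 1.96 for 95% confidence. It yields roughly +/-3.1% for n = 1,020; that the published +/-3.2% also reflects a design effect from survey weighting is an assumption here, not something the poll's methodology statement above confirms.

```python
import math

def margin_of_error(n: int, z: float = 1.96, p: float = 0.5) -> float:
    """Worst-case sampling margin of error for a simple random sample.

    p = 0.5 maximizes p * (1 - p), giving the largest possible margin;
    z = 1.96 corresponds to a 95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

# The poll's sample of 1,020 adults:
print(f"+/-{margin_of_error(1020):.1%}")  # prints +/-3.1%
```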

Other significant findings: 46% say candidates who maliciously alter or fake photos, video or audio should be barred from holding office, and 69% are not confident they could detect faked photos.

Nearly one in four respondents (23%) say they have used large language models or chatbots such as ChatGPT, Gemini or Claude. Asked whether those tools are biased, majorities of Democrats, Republicans and independents said they were not sure.

Asked about confidence in the voting process in this presidential election, 60% are “very” or “somewhat” confident people’s votes will be accurately cast and counted. By party, 83% of Democrats are confident, while 60% of Republicans are not.

Digging deeper into candidates’ misuse of artificial intelligence, the survey offered four choices in response to the question: “If it were proven that a political candidate had maliciously and intentionally digitally altered or faked photos, videos or audio files, should one of the penalties be:”

Ninety-three percent wanted some punishment. The choices were no penalty (4%), a serious fine (12%), criminal prosecution (36%) and prevention from holding office, or removal from office if the candidate had won the election (46%). Within the data, women favored prevention from holding office, while criminal prosecution drew stronger support from households earning more than $100,000 and from respondents with college educations. Republicans (17%) were more likely than Democrats (8%) to favor a serious fine. The no-penalty option did not exceed single-digit support in any group.

The poll found 61% of respondents very or somewhat confident they can get accurate and trustworthy news and information during the election. Only 28%, however, believed most voters could do the same. In another question, 53% said it is very or somewhat easy “these days” to “get the political news and information” they want.

“Misinformation in elections has been around since before the invention of computers, but many worry about the sophistication of AI technology in 2024 giving bad actors an accessible tool to spread misinformation at an unprecedented scale,” said Husser, professor of political science and director of the Elon University Poll. “We know that most voters are aware of AI risks to the 2024 election. However, the behavioral implications of that awareness will remain unclear until we see the aftermath of AI-generated misinformation.

“An optimistic hope is that risk-aware voters may approach information in the 2024 cycle with heightened caution, leading them to become more sophisticated consumers of political information. A pessimistic outlook is that worries about AI-misinformation might translate into diminished feelings of self-efficacy, institutional trust and civic engagement.”