The warnings have grown louder and more urgent as 2024 approaches: the rapid advancement of artificial intelligence threatens to amplify disinformation in next year’s presidential election on a scale never seen before.
Most adults in the United States think the same, according to a new poll from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy.
The survey found that nearly 6 in 10 adults (58%) believe AI tools — which can target political audiences, produce persuasive messages at scale, and generate highly realistic but fake images and videos in a matter of seconds — will increase the spread of false or misleading information during next year’s elections.
In contrast, only 6% believe AI will reduce the spread of misinformation, while a third said it won’t make much difference.
“Look what happened in 2020, and that was only with social media,” said Rosa Rangel, a 66-year-old woman from Fort Worth, Texas.
Rangel, a Democrat who said she saw plenty of “lies” spread on social media in 2020, said she thinks AI will make things worse in 2024, like a pot boiling over.
Only 30% of American adults have used AI chatbots or image generators, and fewer than half (46%) have heard or read at least something about AI tools. Still, there is broad consensus that candidates should not use AI.
Asked whether it would be good or bad for the 2024 presidential candidates to use artificial intelligence in certain ways, large majorities said it would be bad for them to create false or misleading content for political ads (83%), edit or retouch photos or videos for political ads (66%), tailor political ads to individual voters (62%), or answer voters’ questions via chatbots (56%).
Those sentiments are shared by majorities of both Republicans and Democrats, who agree it would be bad for presidential candidates to create fake images or videos (85% of Republicans and 90% of Democrats) or to answer voters’ questions with AI (56% of Republicans and 63% of Democrats).
However, AI has already been used in the Republican presidential primaries.
The Republican National Committee released a completely AI-generated ad in April that sought to show the country’s future if President Joe Biden was re-elected.
The ad used fake but realistic-looking photos showing boarded-up businesses, armored military patrols in the streets, and waves of migrants spreading panic. Small print in the ad disclosed that it had been generated by AI.
Florida Governor Ron DeSantis also used AI in his campaign for the Republican Party nomination. He promoted an ad that used AI-generated images to make it look like former President Donald Trump was hugging Dr. Anthony Fauci, the infectious disease expert who oversaw the national response to the COVID-19 pandemic.
Never Back Down, a super PAC supporting DeSantis, used an AI tool to imitate Trump’s voice, making it sound as if he were reading a social media post.
“I think they should campaign on their merits, not on their ability to strike fear into the hearts of voters,” said Andie Near, a 42-year-old woman from Holland, Michigan, who typically votes Democratic.
Near has used AI tools to retouch images in her museum work, but said she thinks politicians who use the technology to mislead can “aggravate and worsen the effect that even conventional messages can cause.”
Thomas Besgen, a Republican-leaning college student, also disapproves of campaigns using digitally manipulated audio or images to make it seem as though a candidate said something he or she never said.
“Morally, that is wrong,” said the 21-year-old from Connecticut.
Besgen, a mechanical engineering student at the University of Dayton in Ohio, said he supports banning deepfake ads or, if that’s not possible, requiring them to be labeled as AI-generated.
The US Federal Election Commission is considering a petition urging it to regulate AI-generated deepfakes in political ads ahead of the 2024 elections.
Although skeptical of the use of AI in politics, Besgen said he was enthusiastic about its potential for the economy and society. He regularly uses AI tools like ChatGPT to better understand history topics that interest him or to brainstorm ideas.
He also uses image generators for fun; for example, to imagine what certain sports stadiums might look like in 100 years.
He said he generally trusts the information he gets through ChatGPT and is likely to use it to learn more about presidential candidates, something only 5% of adults said they are likely to do.
According to the survey, Americans are more likely to turn to the news media (46%), friends and family (29%), and social media (25%) than to artificial intelligence chatbots to learn about the presidential election.
“Whatever answer it gives me, I would take it with a grain of salt,” Besgen said.
The vast majority of Americans are similarly skeptical of the information AI chatbots provide. Only 5% say they are very confident that the information is factual, while 33% say they are somewhat confident, according to the survey.
Most adults (61%) say they have little or no confidence that the information is reliable.
That’s in line with warnings from many AI experts against using chatbots to gather information. The large language models that power chatbots work by repeatedly selecting the most plausible next word in a sentence, which makes them good at imitating writing styles but also prone to making things up.
Adults who lean toward either major political party tend to be open to AI regulation. They responded more positively than negatively to various ways that technology companies, the federal government, social media companies, or news outlets could ban or label AI-generated content.
About two-thirds favor the government banning AI-generated content that contains false or misleading images in political ads, while a similar number want tech companies to label all AI-generated content on their platforms.
Biden rolled out federal guidelines for artificial intelligence on Monday, signing an executive order to guide the development of the technology. The order calls on the industry to develop safety and security standards, and directs the Department of Commerce to issue guidance on labeling and watermarking AI-generated content.
Americans largely view avoiding false or misleading AI-generated information during the 2024 presidential election as a shared responsibility.
About 6 in 10 (63%) say much of the responsibility falls on the tech companies that create AI tools, but about half also assign significant responsibility to the news media (53%), social media companies (52%), and the federal government (49%).
Democrats are somewhat more likely than Republicans to say that social media companies bear a lot of responsibility, but they generally agree on the level of responsibility of technology companies, the media and the federal government.
Source: AP