USA: Communications Commission considers regulating AI political ads on radio and TV

The chairwoman of the US Federal Communications Commission (FCC) on Wednesday presented a proposal that would require political advertisers to disclose when they use artificial intelligence-generated content in radio and television ads.

If adopted by the five-member commission, the proposal would add a level of transparency that many lawmakers and AI experts have been calling for amid the rapid advancement of generative AI tools that produce realistic images, videos and audio clips capable of misleading voters in the upcoming US elections.

However, the country’s main telecommunications regulator only has authority over television, radio and some cable television providers. The new rules, if adopted, would not cover the extraordinary growth of advertising on digital and streaming platforms.

“As artificial intelligence tools become more accessible, the commission wants to ensure that consumers are fully informed of when the technology is being used,” said FCC Chairwoman Jessica Rosenworcel, in a statement on Wednesday. “Today I shared with my colleagues a proposal that makes clear that consumers have the right to know when AI tools are used in the political ads they see, and I hope they will act quickly on this matter.”

This proposal marks the second time this year that the commission has taken significant steps to address the growing use of artificial intelligence tools in political communication. Previously, the FCC confirmed that robocalls using AI voice cloning are prohibited under current law. That decision followed an episode in the New Hampshire primary elections, when robocalls used voice-cloning software to imitate President Joe Biden in order to discourage voters from going to the polls.

If approved, the proposal announced Wednesday would require broadcasters to verify with political advertisers whether their content was generated with AI tools, such as text-to-image generators or voice-cloning programs. The FCC has authority over political advertising broadcast by radio and TV stations under the Bipartisan Campaign Reform Act of 2002.

Commissioners will have to debate several details of the proposal, such as whether broadcasters will have to disclose AI-generated content in an on-air message or only in the radio or television station’s political archives, which are public. They will also have to come to an agreement on the definition of AI-generated content, a challenge that has become difficult as retouching tools and other AI advances become increasingly integrated into all types of creative software.

Rosenworcel hopes the regulations will take effect before the elections.

Jonathan Uriarte, spokesperson and policy advisor for Rosenworcel, said the FCC chair intends to define AI-generated content as content generated by computer technology or machine-based systems, “including, in particular, AI-generated voices that sound like human voices, and AI-generated actors that appear to be human actors.” But he said the draft definition will likely change during the regulatory process.

The proposal comes at a time when political campaigns have already experimented extensively with generative AI, from building chatbots for their websites to creating videos and images with the technology.

Last year, for example, the Republican National Committee ran an entirely AI-generated ad that attempted to depict a dystopian future under another Biden administration. It used false but realistic photos showing boarded-up businesses, military patrols in the streets and waves of migrants spreading panic.

Political campaigns and malicious actors have also used highly realistic images, videos and audio content to scam, deceive and disenfranchise voters. In India’s recent elections, AI-generated videos in which Bollywood stars criticize the prime minister are an example of a trend that AI experts say is emerging in democratic elections around the world.

Rob Weissman, president of the advocacy group Public Citizen, said he was glad to see the FCC “taking a step forward to proactively address threats from artificial intelligence and deepfakes, especially to the integrity of the elections.”

Weissman urged the FCC to require that the AI disclosure be broadcast on air for the public’s benefit, and criticized another agency, the Federal Election Commission, for its delays; it is also studying whether to regulate AI-generated deepfakes in political advertising.

Yvette Clarke, a Democratic representative from New York, said the time has come for Congress to act against misinformation on the internet, over which the FCC has no jurisdiction. Clarke has introduced a bill that would require disclosure of AI-generated content in online ads.

As generative AI has become cheaper, more accessible and easier to use, several bipartisan groups of lawmakers have called for legislation to regulate the technology in politics. With just over five months until the November elections, no bill has yet been approved.

A bipartisan bill introduced by Senators Amy Klobuchar, a Democrat, and Lisa Murkowski, a Republican, would require a disclosure to be included in political ads that are created or significantly altered using AI. It would also require the Federal Election Commission to respond to violations.

Uriarte said Rosenworcel is aware that the FCC’s jurisdiction to act against AI-related threats is limited, but that she wants to do what the agency can before the 2024 elections.

“This proposal provides the highest standards of transparency that the commission can enforce under its jurisdiction,” Uriarte said.

“We hope that government agencies and legislators can build on this important first step to establish a transparency standard for the use of AI in political advertising.”

Source: Gestion
