NEW YORK — The Federal Communications Commission has advanced a proposal that would require political advertisers to disclose their use of artificial intelligence in broadcast television and radio ads, though it is unclear whether any new rules will be in place before the November presidential election.
The proposed rules announced Thursday could add a layer of transparency to political campaigning that some tech watchdogs have called for, helping inform voters about lifelike but misleading AI-generated media in ads.
“There’s too much potential for AI to manipulate voices and images in political advertising to do nothing,” the agency’s chairwoman, Democrat Jessica Rosenworcel, said Thursday in a news release. “If a candidate or issue campaign used AI to create an ad, the public has a right to know.”
But the FCC’s action is part of a federal turf war over the regulation of AI in politics. The move has faced pushback from the chairman of the Federal Election Commission, who previously accused the FCC of stepping on his own agency’s authority and has warned of a possible legal challenge.
Political candidates and parties in the United States and around the world already have experimented with rapidly advancing generative AI tools. Some have voluntarily disclosed their use of the technology, while others have weaponized it to mislead voters.
The FCC is proposing to require broadcasters to ask political advertisers whether their content was created using AI tools, such as text-to-image generators or voice-cloning software. The agency also aims to require broadcasters to make an on-air announcement when AI-generated content is used in a political ad and to include a notice disclosing the use of AI in their online political files.
The commission acknowledges it would not have authority over streaming, leaving the growing political advertising industry on digital and streaming platforms unregulated at the federal level.
After the commission’s 3-2 vote, the proposal will move into a 30-day public comment period, followed by a 15-day reply period. Commissioners are then expected to finalize and pass a rule. It is unclear whether there is time for it to go into effect before a presidential election that is just over three months away.
Jonathan Uriarte, a spokesperson for Rosenworcel, said the chairwoman “intends to follow the regulatory process but she has been clear that the time to act is now.”
After Rosenworcel announced her proposed rule in May, FEC Chairman Sean Cooksey, a Republican, sent her a letter cautioning her against the move.
“I am concerned that parts of your proposal would fall within the exclusive jurisdiction” of the FEC and would “directly conflict with existing law and regulations, and sow chaos among political campaigns for the upcoming election,” he wrote.
If the FCC moves forward, it could create “irreconcilable conflicts” between the agencies that may end up in federal court, he said in the letter.
A Republican commissioner at the FCC, Brendan Carr, agreed with Cooksey, saying on the social media platform X in June that the FCC’s “plan to impose new regulations on political speech in the run-up to the election is as misguided as it is unlawful.”
But the FEC’s vice chair, Democrat Ellen Weintraub, contradicted Cooksey in a separate letter to Rosenworcel, saying “no one agency currently has the jurisdiction or capacity to address every aspect of this large and complicated issue.”
The FCC maintains it has authority to regulate on the issue under the 1934 Communications Act and the Bipartisan Campaign Reform Act.
Congress has not passed laws directing the agencies on how they should regulate AI in politics. Some Republican senators have circulated legislation intending to block the Democratic-led FCC from issuing its new rules. Meanwhile, the FEC is considering its own petition on regulating deepfakes in political ads.
In the absence of federal action, more than one-third of states have created their own laws regulating the use of AI in campaigns and elections, according to the National Conference of State Legislatures.
In February, the FCC ruled that robocalls containing AI-generated voices are illegal, a step that empowered the commission to fine companies that use AI voices in their calls or block the service providers that carry them.
___
The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.