Yes, AI is allowed in political ads. But that may change soon

While lawmakers and the FEC have considered rules for the use of artificial intelligence in political advertising, there are no laws against it yet.

Artificial intelligence technology has advanced rapidly. Generative AI, technology that can create entirely new content, now produces increasingly realistic images, video and audio. That has fueled growing concern the technology could be used to influence elections by falsely portraying candidates’ words or actions.

Meta, the parent company of Facebook and Instagram, announced on Nov. 8, 2023, that it will require advertisers on its platforms to disclose the use of AI in political ads. The policy is similar to one Google implemented in September, which also requires political ads on its platforms to disclose the use of AI.

But is it even legal for politicians to use AI in their ads to suggest their rivals said or did something they didn’t? That’s what some VERIFY followers have asked on social media.

THE QUESTION

Is AI allowed in political ads?

THE SOURCES

Federal Election Commission (FEC)
Public Citizen
FEC notice in the Federal Register
Bills introduced in the U.S. Senate in May and September 2023
The Biden administration’s Oct. 30, 2023, executive order on artificial intelligence
Advertising policies from Meta, Google and X

THE ANSWER

This is true.

Yes, AI is allowed in political ads. However, federal agencies and lawmakers are considering restrictions on deceptive AI in political ads, as well as requirements to disclose when AI is used in political ads on TV and other platforms.


WHAT WE FOUND

There are currently no federal rules prohibiting the use of artificial intelligence in political ads. However, the Federal Election Commission (FEC) and lawmakers are considering options for regulating AI in political advertising, such as by prohibiting the use of AI to fraudulently misrepresent another candidate or by requiring a disclosure when AI is used to create a political ad.

On Aug. 16, 2023, the FEC announced it was seeking public comment on a petition for the FEC to amend a rule that already prohibits a candidate from “fraudulently misrepresenting other candidates or political parties” so that it applies to “deliberately deceptive Artificial Intelligence campaign advertisements.” 

The petition came from Public Citizen, a nonprofit consumer advocacy organization, which said it is becoming “increasingly difficult and, perhaps, nearly impossible for an average person to distinguish deepfake videos and audio clips from authentic media.”

Currently, the FEC prohibits candidates or people who work for candidates from fraudulently speaking, writing or otherwise acting on behalf of another candidate or organization in a manner that would damage that other candidate or organization. For example, a candidate’s campaign couldn’t make a commercial saying the campaign would accept bribes on behalf of a competing candidate.

The FEC’s notice in the Federal Register says Public Citizen believes a candidate’s deepfake audio or video clip could be used to fraudulently misrepresent and harm another candidate by “falsely putting words into another candidate's mouth, or showing the candidate taking action they did not [take].”

But Public Citizen is not asking the FEC to prohibit all uses of AI in political advertising. It drew a distinction between deepfakes and AI content that either is not used to deceive voters or carries a prominent disclosure that the image, audio or video was generated by AI and portrays fictitious actions or statements.

If the FEC decides the petition has merit, it could begin the rulemaking process needed to adopt the requested regulation. However, the FEC has yet to say whether it will take that next step.

A bipartisan group of senators also introduced a bill in September to “prohibit the distribution of materially deceptive AI-generated audio or visual media relating to candidates for Federal office.” As of Nov. 11, 2023, the Senate has not taken action on the bill since it was introduced on Sept. 12 by Sen. Amy Klobuchar (D-MN), Sen. Josh Hawley (R-MO), Sen. Susan Collins (R-ME) and Sen. Christopher Coons (D-DE). 

Beyond that rule on fraudulent misrepresentation, the FEC currently has no other rules addressing the use of AI in political advertising. A letter from Democratic members of Congress urging the FEC to act on the AI petition also asked the agency to go further by requiring “disclaimers on campaign advertisements that include content created by generative AI.”

It’s not the first time Democratic senators have brought up the possibility of requiring AI-generated political ads to include disclaimers. Sen. Amy Klobuchar (D-MN), Sen. Cory Booker (D-NJ) and Sen. Michael Bennet (D-CO) introduced legislation in May that would require a political ad to include a statement within the ad if generative AI was used for any image or video footage in the advertisement.

There was no further action on the bill after it was introduced in the Senate.

Congress has not passed any law regulating the use of AI in political advertising, or in any other kind of advertising.

On Oct. 30, the Biden administration issued an executive order on the development and use of AI. While the order requires federal agencies to protect Americans from “AI-enabled fraud,” it does not directly address advertising.

Google, Facebook and Instagram acted voluntarily when they implemented their disclosure rules, since no federal law currently requires AI disclosures on digital political ads.

X, formerly known as Twitter, does not currently have a rule directly addressing the use of AI in political advertising, although it does have a policy against sharing “synthetic or manipulated media that are likely to cause harm.”
