
Policy Backgrounders

CED’s Policy Backgrounders provide timely insights on prominent business and economic policy issues facing the nation.

FEC Interpretive Rule on AI in Political Ads

September 25, 2024

Trusted Insights for What’s Ahead™

The rapid growth in the quantity and sophistication of generative artificial intelligence (AI) content is cause for concern as the November elections approach, especially given the absence of federal legislation regulating the use of deepfakes (realistic yet fabricated media) in political ads, an issue CED examined in our Policy Backgrounder Regulating Political Deepfakes.

Last week, the Federal Election Commission (FEC) issued an Interpretive Rule clarifying that the use of AI in political ads falls under existing regulations issued under the Federal Election Campaign Act (FECA) barring fraudulent misrepresentation. The Commission voted 5-1 to approve the compromise drafted by Democratic Commissioners Dara Lindenbaum and Shana Broussard and Republican Commissioners Trey Trainor and Allen Dickerson. The move takes the place of new rulemaking, which the nonprofit advocacy group Public Citizen had sought in a May 2023 petition.

  • The FEC’s action is similar to that of other agencies. In July, the Federal Communications Commission (FCC) issued a Notice of Proposed Rulemaking to require on-air and written disclosure of AI-generated content in radio and television political ads. However, the FCC does not have jurisdiction over streaming video or digital platforms, potentially limiting the impact of the proposal. In February, the agency issued a Declaratory Ruling classifying AI-generated voices in robocalls as artificial voices under the Telephone Consumer Protection Act (TCPA), effectively making such calls illegal without prior customer consent. The Food and Drug Administration (FDA) has also approved medical devices using artificial intelligence/machine learning software under its 510(k) premarket clearance authority rather than requiring new medical device approval applications.
  • There is little evidence that the use of generative AI affects actual electoral outcomes; rather, the major threat posed by deepfakes lies in increasing polarization and undermining the credibility of institutions.
  • While the FEC’s action applies only to US persons, Russia, Iran, and China are using AI tools to further divide Americans ahead of this election. For instance, Microsoft determined that a recent viral video featuring an actress falsely claiming that Vice President Kamala Harris had injured her in a hit-and-run was a product of Russian propaganda.
