To Safeguard Democracy, Political Ads Should Disclose Use of AI
By Robert Seamans
U.S. Representative Yvette Clarke (D-N.Y.) recently proposed a bill that would require disclosure of the use of artificial intelligence (AI) in the creation of political advertisements. This is a timely bill that should garner bipartisan support and help safeguard our democracy. Recent advances in language modeling, exemplified by the popularity of ChatGPT, and in image generation, exemplified by DALL-E 2 and Midjourney, make it much easier to create text or images that are intentionally misleading or false. Indeed, there are already examples of people using these technologies to spread fake political news in the U.S. and abroad. In early April, a number of fake, AI-generated images of Trump mug shots circulated online, and later that month the Republican National Committee (RNC) responded to President Biden’s re-election campaign with an AI-generated ad. In May, there were accusations of AI being used in deceptive political ads in the lead-up to Turkey’s elections.
Fake news stories and doctored photos are not new phenomena. A well-known technique at fashion magazines is to digitally alter or “touch up” a celebrity’s appearance on a magazine cover. The goal is to drive magazine sales, and perhaps also publicity for the celebrity. Elections, however, are a different matter, one that involves far more consequential outcomes.
Allegations of fake news were rampant during the 2016 and 2020 U.S. elections. Perhaps as a result, distrust of the media has increased. According to Pew Research, Republican voters have experienced a particularly large drop in trust in news organizations. While some of this decline may be due to politicians dismissing news outlets as “fake news” (whether the coverage is actually fake or not), some is likely also due to exposure to actual fake news stories.
Read the full Forbes article.
___
Robert Seamans is Associate Professor of Management and Organizations and Director of the Center for the Future of Management.