Prof. Diana Owen
Professor and Director of the Civic Education Research Lab at Georgetown University. She teaches courses on media and politics, political engagement, and statistical methodology. Her research focuses on the evolution of new media in politics, elections and voting behavior, civic education, and political socialization. She has published books on media and American elections, new media and politics, and politics in the information age.
Email: owend@georgetown.edu
Website: Civic Education Research Lab
U.S. Election 2024
64. Reversion to the meme: A return to grassroots content (Dr Jessica Baldwin-Philippi)
65. From platform politics to partisan platforms (Prof Philip M. Napoli, Talia Goodman)
66. The fragmented social media landscape in the 2024 U.S. election (Dr Michael A. Beam, Dr Myiah J. Hutchens, Dr Jay D. Hmielowski)
67. Outside organization advertising on Meta platforms: Coordination and duplicity (Prof Jennifer Stromer-Galley)
68. Prejudice and priming in the online political sphere (Prof Richard Perloff)
69. Perceptions of social media in the 2024 presidential election (Dr Daniel Lane, Dr Prateekshit “Kanu” Pandey)
70. Modeling public Facebook comments on the attempted assassination of President Trump (Dr Justin Phillips, Prof Andrea Carson)
71. The memes of production: Grassroots-made digital content and the presidential campaign (Dr Rosalynd Southern, Dr Caroline Leicht)
72. The gendered dynamics of presidential campaign tweets in 2024 (Prof Heather K. Evans, Dr Jennifer Hayes Clark)
73. Threads and TikTok adoption among 2024 congressional candidates in battleground states (Prof Terri L. Towner, Prof Caroline Muñoz)
74. Who would extraterrestrials side with if they were watching us on social media? (Taewoo Kang, Prof Kjerstin Thorson)
75. AI and voter suppression in the 2024 election (Prof Diana Owen)
76. News from AI: ChatGPT and political information (Dr Caroline Leicht, Dr Peter Finn, Dr Lauren C. Bell, Dr Amy Tatum)
77. Analyzing the perceived humanness of AI-generated social media content around the presidential debate (Dr Tiago Ventura, Rebecca Ansell, Dr Sejin Paik, Autumn Toney, Prof Leticia Bode, Prof Lisa Singh)
The 2024 contest has been dubbed “the first election of the AI era.” Some observers go so far as to claim that AI was itself a new participant in the campaign. Thousands of voters turned to chatbots for basic information about how and where to cast their ballots. AI-generated attack ads promoted false messages and images about opponents. AI campaign volunteers chatted with voters about issues, even conducting conversations in different languages. Foreign governments spread deepfake videos across social media in attempts to interfere in the election.
AI was a shadowy presence in the election from the very start. It was widely used during the nomination campaign. AI-generated images depicted Donald Trump with members of constituency groups he was courting; in one photo, Trump is surrounded by a group of young Black men in what appears to be their neighborhood. Democrats in New Hampshire who opposed Joe Biden’s nomination used an AI-generated robocall that replicated Biden’s voice, telling people to save their vote and not turn out. Taylor Swift drew attention to AI’s role in the election immediately following the September presidential debate between Kamala Harris and Trump. A doctored AI-generated image, originally created by a Biden supporter, appeared on a pro-Trump account on X with the message: “Taylor wants you to vote for Donald Trump.” Trump reposted the image on his Truth Social account with the caption, “I accept!” In endorsing Harris, Swift wrote, “Recently I was made aware that AI of ‘me’ endorsing Donald Trump’s presidential run was posted to his site. It really conjured up my fears around AI, and the dangers of spreading misinformation.”
There has been much speculation and increasing evidence about the dangers of AI in elections. A shrouded and underreported peril is AI’s potential to suppress voter turnout. The U.S. has a long history of restricting voting. Barriers – such as literacy tests and poll taxes – prevented racial minorities, poor people, younger voters, older voters, and people with disabilities from casting a ballot. While the Voting Rights Act of 1965 was successful in curbing voter suppression for several decades, states began enacting policies to keep minority groups from voting following the election of Barack Obama in 2008. Twenty-four states have implemented more restrictive voting policies since 2020. During the 2024 election, eligible voters were purged from the voting rolls in Virginia. In the battleground state of Georgia, it was illegal for political organizations to hand out bottles of water to people standing in line at polling places.
AI can be used to suppress voting by spreading disinformation and by manipulating and intimidating voters. Traditional techniques for this type of voter suppression rely on flyers, robocalls, and emails. AI can disengage and alarm voters more quickly, easily, and efficiently than these tactics. The proliferation of disinformation generated by AI may be the most consequential new development of the 2024 election cycle. AI can create content that promotes voter apathy and distrust. A study by the AI Democracy Projects found that chatbots that simulate human conversation regularly supplied wrong, controversial, offensive, or dangerous information to voters. When five commonly used chatbots were asked if Biden legitimately won the 2020 presidential election, three correctly indicated that Biden had beaten Trump, one said that the election was still undecided, and another responded that the election was stolen. AI can create false “evidence” of crimes and malfeasance that casts doubt on the security of the voting process. It can be used to sensationalize election stories and invent scandals. The Center for Countering Digital Hate found that 41% of images of alleged election tampering in 2020 were created by AI, including photos of boxes of ballots in a dumpster and ballots in a ditch in Arizona.
AI was used to micro-target marginalized communities to generate fear and keep voters away from the polls. Voters were confused by fake text messages about voter registration and phony emails directing them to the wrong polling place. Campaigns used AI to target voters they believed supported the opposition, urging them to “vote from home” to avoid long lines at the polls – an option that exists in no state. Migrant communities were sent messages claiming that U.S. Immigration and Customs Enforcement would be patrolling polling places looking to deport voters. AI-generated content disseminated deceptive rumors that voter fraud in 2024 was rampant. Tools such as EagleAI were used to scrutinize voter rolls and challenge voters’ eligibility. These challenges were rarely upheld, but they nonetheless sowed doubt about election results.
AI’s capacity to influence campaigns remains largely unknown. The implications of AI in elections depend on how people use the technology – for good or for ill. The AI environment is largely unregulated, opening opportunities for abuse. In 2024, serious problems with AI contributed to an already difficult and chaotic election and undercut the positive potential of the technology. How powerful AI will become in future elections remains to be seen.