Dr. Caroline Leicht
Tutor at the University of Glasgow.
Caroline’s research focuses on the intersection of gender, media and politics.
Email: Caroline.Leicht@glasgow.ac.uk
Twitter: @carolineleicht
Bluesky: @carolineleicht.bsky.social
Dr. Peter Finn
Senior Lecturer in Politics at Kingston University, London. His research focuses on various aspects of democracy.
Email: p.finn@kingston.ac.uk
Twitter: @Pete_D_Finn
Prof. Lauren C. Bell
James L. Miller Professor of Political Science at Randolph-Macon College.
She is a former APSA Congressional Fellow.
Email: lbell@rmc.edu
Bluesky: @lmcbell.bsky.social
Dr. Amy Tatum
Lecturer in Communication and Media at Bournemouth University.
She explores media, persuasion and communication.
Email: atatum@bournemouth.ac.uk
U.S. Election 2024
64. Reversion to the meme: A return to grassroots content (Dr Jessica Baldwin-Philippi)
65. From platform politics to partisan platforms (Prof Philip M. Napoli, Talia Goodman)
66. The fragmented social media landscape in the 2024 U.S. election (Dr Michael A. Beam, Dr Myiah J. Hutchens, Dr Jay D. Hmielowski)
67. Outside organization advertising on Meta platforms: Coordination and duplicity (Prof Jennifer Stromer-Galley)
68. Prejudice and priming in the online political sphere (Prof Richard Perloff)
69. Perceptions of social media in the 2024 presidential election (Dr Daniel Lane, Dr Prateekshit “Kanu” Pandey)
70. Modeling public Facebook comments on the attempted assassination of President Trump (Dr Justin Phillips, Prof Andrea Carson)
71. The memes of production: Grassroots-made digital content and the presidential campaign (Dr Rosalynd Southern, Dr Caroline Leicht)
72. The gendered dynamics of presidential campaign tweets in 2024 (Prof Heather K. Evans, Dr Jennifer Hayes Clark)
73. Threads and TikTok adoption among 2024 congressional candidates in battleground states (Prof Terri L. Towner, Prof Caroline Muñoz)
74. Who would extraterrestrials side with if they were watching us on social media? (Taewoo Kang, Prof Kjerstin Thorson)
75. AI and voter suppression in the 2024 election (Prof Diana Owen)
76. News from AI: ChatGPT and political information (Dr Caroline Leicht, Dr Peter Finn, Dr Lauren C. Bell, Dr Amy Tatum)
77. Analyzing the perceived humanness of AI-generated social media content around the presidential debate (Dr Tiago Ventura, Rebecca Ansell, Dr Sejin Paik, Autumn Toney, Prof Leticia Bode, Prof Lisa Singh)
ChatGPT has become a popular way to get a quick overview of topical issues, including politics. But how accurate is it as a source of political information? A central tenet of democracy is that voters (and citizens in general) should have sufficient information about the policies and actions of those who govern them to make an informed decision about whom to vote for. As generative artificial intelligence becomes more embedded in our work and private lives, what possibilities do tools such as ChatGPT offer for providing the public with meaningful political information?
With the 50 States or Bust! project, we set out to explore these questions in a systematic rather than anecdotal fashion. We began by developing a standard list of ChatGPT4 prompts that could be adapted for each US state and territory. These prompts were designed to provide insight into how ChatGPT4 responds to inquiries about the history and politics of US states and territories, and to reveal which sources of information it provides when asked. We ran the prompts through ChatGPT4 for every state and territory, generating 56 profiles.
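The prompt-templating step can be sketched as follows. The list of 56 jurisdictions (50 states, the District of Columbia, and five inhabited territories) is factual, but the template wording and function names are illustrative assumptions, not the project's actual prompts.

```python
# Illustrative sketch: generate a standardized set of prompts for each of
# the 56 US states and territories, as described in the methodology above.

STATES = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California",
    "Colorado", "Connecticut", "Delaware", "Florida", "Georgia",
    "Hawaii", "Idaho", "Illinois", "Indiana", "Iowa",
    "Kansas", "Kentucky", "Louisiana", "Maine", "Maryland",
    "Massachusetts", "Michigan", "Minnesota", "Mississippi", "Missouri",
    "Montana", "Nebraska", "Nevada", "New Hampshire", "New Jersey",
    "New Mexico", "New York", "North Carolina", "North Dakota", "Ohio",
    "Oklahoma", "Oregon", "Pennsylvania", "Rhode Island", "South Carolina",
    "South Dakota", "Tennessee", "Texas", "Utah", "Vermont",
    "Virginia", "Washington", "West Virginia", "Wisconsin", "Wyoming",
]
OTHER_JURISDICTIONS = [
    "District of Columbia", "Puerto Rico", "Guam",
    "U.S. Virgin Islands", "American Samoa", "Northern Mariana Islands",
]

# Hypothetical prompt wording; the project's real prompts are not public here.
PROMPT_TEMPLATES = [
    "Give an overview of the political history of {place}.",
    "Summarize the current major political issues in {place}.",
    "List your sources for this information about {place}.",
]

def build_profile_prompts(place: str) -> list[str]:
    """Fill each standard template for one jurisdiction."""
    return [template.format(place=place) for template in PROMPT_TEMPLATES]

# One prompt set per jurisdiction -> 56 profiles in total.
all_prompts = {p: build_profile_prompts(p) for p in STATES + OTHER_JURISDICTIONS}
print(len(all_prompts))  # prints 56
```

Each prompt set would then be submitted to the model in turn, and the responses collated into the jurisdiction's profile for expert review.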
Following the creation of these profiles, we began speaking to academic experts on the respective states and territories. Using a standard list of questions, we asked the experts to share information about the states and territories they study and to rate the ChatGPT responses both qualitatively and quantitatively. At the time of writing, we have carried out 19 expert interviews, which have also been developed into short podcasts. Our results so far provide initial insights into how generative AI contributes to biases around the nationalization of US politics.
Generally speaking, ChatGPT4 is not a reliable source for political information. Our experts found that more than 40% of the generated profiles contained factual inaccuracies. Further, although our prompts asked for sources in every instance, the information provided was often too vague to be of any use. Our interviewees found that ChatGPT has a tendency to hallucinate false information, making it hard to judge the validity of any information or factual assertions it provides. ChatGPT also hallucinates citations, drawing on “sources” that do not exist. We find that these hallucinated responses occur far more frequently than previous studies have suggested.
In addition to the misinformation in ChatGPT responses to prompts about US politics, we also observed how the availability of source information affects ChatGPT outputs. As a generative AI tool, ChatGPT draws on information that is already available. But what if there are problems with that source information? Our analysis of the ChatGPT profiles of all US states and territories shows that the nationalization of US politics is clearly reflected in the amount and quality of information ChatGPT provides. As local news “deserts” spread across the US, ChatGPT has few, and sometimes no, sources to draw on for local political issues or state-level races. Similarly, the expert interviews suggest that the profiles were biased towards national politics. As a result, ChatGPT users are vulnerable to receiving incomplete information or misinformation based on hallucinated sources. As we know from previous research on media and politics, what people see in the media, or in their preferred news sources, can significantly affect their political behavior: how they discuss politics with their peers or online, and even how they vote. A broader concern arising from our findings is the potential for vast amounts of AI-generated content, untethered from evidence, to undermine public faith in evidence and in the very idea of truthful narratives.
Overall, the preliminary results of our project show that ChatGPT is not a reliable source for political information across US states and territories, which has substantial implications for both democracy and information technology. Regarding the former, democracy depends on an educated citizenry and is threatened when voters act on mis- and disinformation; to the extent that AI provides false or misleading political information, it may exacerbate democratic erosion. Regarding the latter, technology companies urgently need to work towards ensuring that their generative AI tools provide reliable information and, where they cannot, to inform users that these tools do not meet high standards of accuracy and transparency. As voters continue to turn to alternative news sources, including generative AI, more research and policy attention is needed, not only to understand what information voters receive but also to improve information provision in ChatGPT and other generative AI tools so that they can serve more effectively as political sources.