Social media moderation of political talk

Dr Shannon C. McGregor

Shannon C McGregor (PhD, University of Texas) is an assistant professor at the Hussman School of Journalism and Media, and a senior researcher with the Center for Information, Technology, and Public Life – both at the University of North Carolina. Her research addresses the role of social media and their data in political processes, with a focus on political communication, journalism, public opinion, and gender.

Twitter: @shannimcg

Section 7: Democracy in crisis

If the post-2016 story was about the content that was on social media platforms – from Russian bots to Macedonian teens – the post-2020 story is gearing up to be about what was not on social media platforms – or at least not all the way on them. For all their cautious line-toeing in the run up to the 2020 election, platforms like Twitter and Facebook showed a remarkable appetite to make moderation decisions they had previously shirked. Though they had resisted the label earlier, social media platforms became clear arbiters of political truth in 2020 – when they saw democracy at risk and acted to protect it (as well as their bottom lines).

Before the election, we saw a flurry of announcements from platform companies about how they would handle 2020. First, Twitter banned political ads. Facebook said it would not correct false claims in posts or ads from politicians. Throughout the summer, protests against police brutality and for racial justice swept the nation – often encouraged through social media activism (#BlackLivesMatter) but also threatened by it (such as when the president tweeted threats of violence against protestors). Twitter first signaled its appetite for policy enforcement against the president when it took action against that tweet. Then, as the president made crystal clear that he would not accept any outcome other than victory, stoked violence, and refused to commit to a peaceful transfer of power – platforms moved, almost swiftly, to proactively (if also perhaps belatedly) protect democracy. Twitter labeled – and limited the reach of – an unprecedented number of the president’s tweets as they baselessly alleged voter fraud and falsely claimed electoral victory. It’s unclear whether Facebook limited the reach of the president’s posts, but it did append labels to them that pointed users toward truthful contextual information about the election. Facebook also removed groups organizing around false claims of electoral fraud (“Stop the Steal” groups) – though more continued to pop up.

What is most remarkable about these actions is that they were taken specifically on the accounts of political leaders, especially the president himself. Finally, social media platforms acknowledged that political elites – both elected officials and their surrogates – were the largest purveyors of political misinformation. What makes political misinformation from political leaders particularly pernicious is that it often casts politics in identitarian terms: media, and especially social media, are essential sites for constructing and conveying a politician’s identity as well as the identities of the groups of constituents they purport to represent. Though the information – about “stolen” ballots or electoral fraud – is indeed false, these posts are not about the information per se. Instead, they communicate whose votes matter, who should be seen as citizens, who gets to wield power, and ultimately what types of people get a say in electing presidents.

Platforms are downstream from politics and political life. What animates our politics also animates our politics on platforms – and is shaped by platforms. While algorithms may put their thumb on the scale of the divisions in our country, they do not deterministically create them. Simply put: our social reality is reflected and distorted – not created – on social media platforms. We have a political problem in this country: right-wing misinformation, shored up by making in-group identity threats salient and aimed at undermining public trust in institutions – the press, the electoral process, public health – is pervasive. Any attempt to frame these political issues as problems simply of social media and information risks centering another four years of academic, press, and public attention around the wrong targets.

In the wake of 2016, the press and academia focused on the informational quality of posts on social media. We trained our eye toward whether information in these posts was true or false, toward how many people potentially encountered false information, and toward whether it swayed voters to elect Trump. As my colleagues and I have argued, this attention is misguided and has clearly had an outsized impact on public opinion about the effects of misinformation. Research in this area should instead focus on how mis- and disinformation entrench existing divides along the lines of our partisan identities. And it should embrace a focus on the very sort of elite communication against which the platforms – finally – took action. As Francesca Tripodi observed, “… there is reason to believe [Trump] and other conservative politicians are priming their constituents to think that Big Tech rigged the 2020 election in Democrats’ favor.”

If the post-2020 story becomes informational in focus – not about what was on social media platforms, but about what was moderated by social media platforms – we will again miss the mark. Another four-year cycle of public discourse, press attention, and research focus centered narrowly on platform moderation would be a mistake – this is not a platform problem, but a political problem. And Trump, the Republican party, conservative elites, racial structures, and more broadly – politics – should be at the center of, and lead, research on social media moderation.