Platform transparency in the fight against disinformation

22,247. That is the estimated number of false or misleading claims President Donald Trump made between assuming office and the end of August 2020. The 2016 election of Donald Trump as President of the United States challenged journalists and fact-checkers in numerous ways, especially when it came to sorting false from correct information. During the final stretch of the 2020 election campaign, Trump increased the pace of false or misleading claims, averaging 50 a day. There were so many that fact-checkers could not track all of them, raising questions about how to decide what to fact-check. In the public eye, the 2020 election will partly be remembered for the rise of new actors, tools, and practices coordinating to reimagine the intensifying fight against mis- and disinformation. For scholars and practitioners, questioning the role of these actors and tools is becoming ever more pressing.

Since introducing the meaningful social interactions (MSI) algorithm in 2018, Facebook has reduced the overall visibility of news content on its proprietary platform. At the same time, people have gained more opportunities to convene and interact in private spaces, such as mobile chat applications like WhatsApp and Signal, which are not public-facing but rather so-called “dark social”. Journalists and fact-checkers are seeking to adapt, creating tools to fight disinformation and finding new ways to help communities make sense of mis- and disinformation. At the same time, scholars have sought to unpack these reconfigured power dynamics and how mis- and disinformation may best be tackled to affect public perceptions of truth and truthfulness. But this election has intensified new power dynamics in the disinformation ecosystem and shown the hard limits of what fact-checking can accomplish without greater support from platform companies for the researchers, journalists, and fact-checkers seeking to understand and limit the spread of harmful misinformation.

The 2020 election shed light on dis- and misinformation becoming a beat, particularly in legacy news organizations such as NBC, the New York Times, and the BBC. But these beats would not be legitimized without the help of platforms, which these organizations do not own and which dictate the ways information is understood by and shared with the public. For example, since 2016, fact-checking organizations have been working with Facebook to reduce disinformation, and Facebook has required these organizations to be signatories of the International Fact-Checking Network’s (IFCN) Code of Principles, which values transparency at each stage of the fact-checking process.

As a result, platform companies such as Twitter and Facebook have become “arbiters of truth”, albeit “without the methodology or transparency” of fact-checkers, a position these companies had avoided since their inception. Platform companies have long engaged in content moderation, but for disinformation they follow their own community principles rather than abide by policy or law (with exceptions in a few countries). Twitter has been fact-checking Trump: our analysis shows that of 71 tweets Trump posted or retweeted between November 4 and 7, 20 were marked or hidden by Twitter for breaking the company’s Civic Integrity Policy. But the transparency the IFCN asks of fact-checkers has not always been matched by these platform companies.
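A count like this one is simple to reproduce from a hand-collected archive of tweets. Below is a minimal sketch in Python, assuming a hypothetical CSV of collected tweets with a moderation column recording each tweet’s observed status; the file and column names are illustrative and not part of our actual analysis.

```python
import csv
from collections import Counter

# Hypothetical input: one row per collected tweet, with a "moderation"
# column recording the status observed on the live page: "none",
# "labeled" (a warning label attached), or "hidden" (placed behind an
# interstitial). File and column names are illustrative only.
def tally_moderation(path):
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["moderation"]] += 1
    return counts

counts = tally_moderation("trump_tweets_nov4_7.csv")
flagged = counts["labeled"] + counts["hidden"]
print(f"{flagged} of {sum(counts.values())} tweets were marked or hidden")
```

The counting is trivial; the hard, undocumented part is the platform’s decision of which tweets receive a label in the first place, which is exactly the transparency gap at issue.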

These examples show that disinformation intensified during this election. They also show that the fight against it involves a plethora of actors, each seeking to “do their best in fighting disinformation”, whose differing incentives complicate the public’s understanding of what might be the closest thing to facts.

Recent digital innovations in automated fact-checking, aimed at the identification, verification, and correction of disinformation, raise similar ethical questions. Such innovations do not replicate the judgement and sensitivity that human fact-checkers deploy in their practice. As in the case of content moderation, there is limited explanation of and transparency about the authoritative and open-source data these systems rely on.
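To illustrate why automation falls short of human judgement, here is a minimal sketch of one common building block of automated fact-checking, claim matching, under simplified assumptions: a toy database of previously fact-checked claims (invented for illustration) and plain string similarity standing in for the learned semantic matching a production system would use.

```python
from difflib import SequenceMatcher

# Toy database of previously fact-checked claims and verdicts, invented
# for illustration; real systems match against large curated databases.
FACT_CHECKS = {
    "mail-in ballots lead to massive voter fraud": "false",
    "turnout in the 2020 election set a modern record": "true",
}

def match_claim(claim, threshold=0.6):
    """Return the closest fact-checked claim and its verdict, or None.
    String similarity stands in for semantic matching, so paraphrases,
    negation, irony, and context are easily missed."""
    scored = [
        (SequenceMatcher(None, claim.lower(), known).ratio(), known)
        for known in FACT_CHECKS
    ]
    score, best = max(scored)
    return (best, FACT_CHECKS[best]) if score >= threshold else None

print(match_claim("Mail-in ballots cause massive voter fraud"))
```

Even this tiny example shows the gap: a near-paraphrase retrieves the right verdict, but a claim reworded with negation or satire would either miss the threshold or match the wrong verdict, which is precisely where human judgement intervenes.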

In sum, the power to decide what is true online is mediated through platform companies, as media organizations and diverse other actors take advantage of platform affordances, and of the platforms’ inefficiency or unwillingness to actively moderate such activities. Fact-checkers and journalists alike have sought to find a meaningful footing. Going forward, platform companies should be held to the same transparency standard many fact-checking organizations embrace, by providing a clear explanation of the principles and methods they employ to combat mis- and disinformation. Sharing information with researchers and fact-checkers about the spread of problematic content on social networks, and about the effectiveness of different interventions, would provide a vital boost to their efforts.

In the Source Criticism and Mediated Disinformation (SCAM) project, we look at key industry representatives in tech and platform companies, tech and media industry associations, and fact-checking organizations around the world, and examine how these actors experience the effects of digital technology on the critical evaluation of sources and information, and how they develop new approaches to it.