LUISS Data Lab: The difference between Disinformation and Fact Checking

A few days ago, our partners at the LUISS Data Lab released the article ‘The difference between Disinformation and Fact Checking: Everyone is entitled to his own opinion, but not to his own facts’, which focuses on disinformation and misinformation and, in particular, on the importance of fact-checking.

You can read the article below.

The difference between Disinformation and Fact Checking: Everyone is entitled to his own opinion, but not to his own facts

U.S. Senator Daniel Patrick Moynihan’s famous quote – sometimes attributed to the economist James R. Schlesinger – no longer applies in the digital age. From political elections to sport, healthcare, wars and cultures, public opinion increasingly clusters in closed, often angry, communities clinging to “their own facts”. From this perspective, the 2016 US presidential election marked the transition from an age of ‘post-trust’ (Löfstedt, 2005) to an era of ‘post-truth’ (Higgins, 2016), with contemporary advanced societies experiencing a rise not just in anti-scientific thinking, but in all manner of reactionary obscurantism, from online conspiracy theories to the much-discussed “death of expertise” (Nichols, 2017). Around the time Time magazine asked on its cover “Is Truth Dead?”, Oxford Dictionaries declared ‘post-truth’ the word of the year for 2016, reflecting “circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal beliefs” (en.oxforddictionaries.com/word-of-the-year/word-of-the-year-2016). According to various definitions, “disinformation” means:

  • false information deliberately and often covertly spread (as by the planting of rumors) in order to influence public opinion or obscure the truth (https://www.merriam-webster.com/dictionary/disinformation)
  • false information, as about a country’s military strength or plans, disseminated by a government or intelligence agency in a hostile act of tactical political subversion: Soviet disinformation drove a wedge between the United States and its Indonesian allies.
  • deliberately misleading or biased information; manipulated narrative or facts; propaganda: Special interest groups muddied the waters of the debate, spreading disinformation on social media. (https://www.dictionary.com/browse/disinformation)

By the term “fact checking” we mean, instead (https://www.dictionary.com/browse/fact-check):

  • to confirm the truth of (an assertion made in speech or writing), often as part of the research or editorial process.
  • the action of confirming the truth of an assertion made in speech or writing: ​A fact-check of the senator’s speech revealed several misleading comments about his opponent.

The phenomenon of disinformation

The issue of disinformation and misinformation has recently become a matter of great importance for society. Lionel Barber of the Financial Times summed up the challenge (FT, 16 September 2017) as follows: “Fake news damages public trust in news media. Fake news undermines public confidence in our democratic discourse. Fake news exacerbates economic pressures facing quality news organisations. Finally, and perhaps most importantly, fake news highlights issues of responsibility and regulation in our fast-evolving media ecosystem.”

We are in a revolutionary era for the dissemination of information and knowledge. An era, as data journalist Nate Silver wrote in his seminal essay “The Signal and the Noise” (Silver, 2012), that dwarfs the information revolution ignited in the 15th century by the invention of movable-type printing. The vast amount of input available every day can harm rather than benefit people if they are unable to distinguish the signal from the background noise. Borrowing the terminology of Brody and Meier (Brody and Meier, 2018), what we commonly call fake news is an especially dangerous form of adversarial noise that affects the information channel and is hard to eradicate. News consumers receive incomplete and often contradictory pieces of information, some of which are deliberately conceived to steer them away from the truth, and their awareness of and attitude towards this noise significantly affect their ability to filter it. The long-standing debate about the relationship between media and democracy has been reinvigorated, reversing the initial euphoria about the ability of the Internet and social media to deepen democratic functioning through new channels for public participation and debate (Lévy, 2002; Norris, 2001). On the contrary, there is now widespread concern in many segments of society that social media may instead be undermining the quality of democracy (Tucker et al., 2018), with reference to its fundamental dimensions (Diamond and Morlino, 2005).
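The notion of adversarial noise can be made concrete with a toy model. The sketch below is not Brody and Meier’s actual framework, only a minimal illustration of a consumer at the end of a noisy channel updating her estimate of a claim’s truth value as reports arrive; the parameters `p_honest` and `acc_honest` are hypothetical.

```python
def posterior_true(reports, p_honest, acc_honest, prior=0.5):
    """Posterior probability that a binary claim is true, given a stream of
    True/False reports arriving over a noisy channel.

    Illustrative assumptions: each report independently comes from an honest
    source with probability p_honest, which is correct with probability
    acc_honest; otherwise it comes from an adversarial source that always
    asserts the opposite of the truth.
    """
    # Probability that a single report *supports* the claim, under each hypothesis
    support_if_true = p_honest * acc_honest                          # honest and right
    support_if_false = p_honest * (1 - acc_honest) + (1 - p_honest)  # honest but wrong, or adversarial
    p = prior
    for supports in reports:
        if supports:
            num, den = p * support_if_true, (1 - p) * support_if_false
        else:
            num, den = p * (1 - support_if_true), (1 - p) * (1 - support_if_false)
        p = num / (num + den)  # Bayes' rule
    return p

# Mostly honest channel: ten supporting reports are near-conclusive
posterior_true([True] * 10, p_honest=0.9, acc_honest=0.8)   # ≈ 0.9999
# Heavily polluted channel: the same ten reports now *lower* belief
posterior_true([True] * 10, p_honest=0.5, acc_honest=0.8)   # ≈ 0.017
```

The toy model captures the point about awareness and attitude: once adversarial sources are numerous enough, a supporting report actually becomes evidence against the claim, so the sheer volume of “confirmation” tells the consumer nothing unless she can estimate how polluted the channel is.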

The transition from the traditional mainstream media world to the noisy digital infosphere has so far been addressed as an industrial or technological issue, calling into question the media business model. Alas, “the spread of false information is one of the most dangerous trends of our age” (World Economic Forum, 2014), and the problem we are tackling calls into question the very foundations of our culture and values. Can democracies, open societies and free communities survive if the flow of information is tainted by fake news, polluted by artificially constructed data, blinded by politically designed narratives? Sociologist Jürgen Habermas stated in 1972 that democracies need a shared sphere of information, leading to a “critical public opinion”. Digital philosopher Luciano Floridi draws the same conclusions today about digital media. The decline of traditional media will eventually be contained with new tools online, but how can a free world survive without a set of shared, reliable, fair news and information? These are the cultural, political, technological and ethical issues we are discussing now.

The efforts deployed by major social media platforms seem insufficient to curb the illicit use of their functionalities. In 2017, Twitter expressed an alarmingly shallow stance towards bots and disinformation, stating that bots are a “positive and vital tool” and that “Twitter’s open and real-time nature is by definition a powerful antidote to the spreading of false information, and that [thanks to Twitter] journalists, experts and engaged citizens can correct and challenge public discourse in seconds” (Crowell, 2017). Although Twitter claims a practical and financial commitment to recognizing and preventing any malicious use of its services, its official position is to keep its internal ongoing research on the matter confidential. In its “Report on Information Operations” (Weedon, Nuland, and Stamos, 2017), Facebook analyzed collected evidence of actions taken to deliberately distort domestic or foreign political sentiment and discussion. By their own admission, researchers at Facebook set aside the content that accounts publish and focus only on monitoring users’ activity to verify account authenticity. They assert that manipulation by malicious actors accounts for “less than one-tenth of a percent of the total reach of civic content on Facebook”. However, as many as 60M bots have been estimated to infest Facebook (compared to Twitter’s 14M), and prior to the latest French presidential elections Facebook shut down over 30K fake accounts.

In a Policy Forum article published in Science in March 2018 (Lazer et al., 2018), Lazer and more than 15 other international experts review the “science of fake news”. What emerges from their work is that, although many communication, cognitive, social, political and computer scientists have recently joined the fight against digital dis- and misinformation, the research community is still far from stemming the viral diffusion of fake news. Lazer and his co-authors recognize the urgency of a multidisciplinary effort involving private and public institutions as well as social media platforms. On the one hand, they suggest working towards empowering individuals to recognize fake news; on the other, they advocate structural changes and the definition of new algorithms. Despite the many efforts and some (partial) successes, technical solutions currently struggle to prevent disinformation from entering the news stream.

Fact checking

The fake news epidemic has so far proved intractable to all countermeasures. The technical tools and political strategies deployed against the disruption of our civic conversations – debunking, AI, source evaluation, social media platforms’ self-checking, codes of conduct, governments’ ad hoc rules, the boosting of quality journalism – while useful, have not stemmed the growing menace of tainted information. Fake news is indeed the symptom, not the disease we are suffering from. It festers in the moral immunodeficiency induced by the lack of trust in political authorities and by the nihilism crippling the sphere of our public opinion. We now enjoy the most powerful library of knowledge in history, yet millions of individuals shut themselves away in a dimension of weaponized lies. Like a listener sitting at the end of a noisy channel, any consumer who is not a direct witness of a “fact” is fed a set of incomplete and contradictory pieces of information, and it is her responsibility to come up with the best possible estimate of the truth value of that fact. The advent of social media and the diffusion of a peer-to-peer information model laid the basis for the outbreak of a sort of “adversarial noise”, which we commonly call fake news. Fake news is in fact noise that is not “random”, but oriented so as to steer public opinion away from the truth.

Many organizations, from large editorial groups to small blogs, are currently trying to counter disinformation by means of fake news debunking. The number of active fact-checking organizations almost tripled between 2014 and 2017 (Hassan et al., 2017), and specialized websites such as PolitiFact are drawing ever-increasing attention. Fact-checking is usually done manually by trained professionals, to the detriment of scalability. To try to keep up with the rate at which misinformation is produced and shared nowadays, researchers are currently working on automating the fact-checking process. In 2016 Full Fact, a UK-based independent fact-checking charity, produced a white paper entitled “The State of Automated Fact checking”, which includes both a survey and a roadmap for research in the field. The authors argue that, using currently available technologies, fact-checking could be sped up dramatically, but that this requires global collaboration, open standards, shared design principles and infrastructures, and continuous research into machine learning. However, the task has proved very challenging, and despite a few relevant tools and techniques there exists at the moment no full-fledged automated fact-checking system.
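One building block commonly proposed for such automation is claim matching: before checking a new statement from scratch, a system looks for previously fact-checked claims it closely resembles. The sketch below illustrates the idea with plain bag-of-words cosine similarity; it is a toy illustration, not Full Fact’s or any organization’s actual pipeline, and the mini-database and threshold are hypothetical.

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase and strip surrounding punctuation; a real system would use NLP tooling
    words = (w.strip(".,!?;:\"'()").lower() for w in text.split())
    return [w for w in words if w]

def cosine(a, b):
    # Cosine similarity between two bag-of-words vectors
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def match_claim(statement, checked_claims, threshold=0.5):
    """Return previously fact-checked claims similar to `statement`, best match first."""
    tokens = tokenize(statement)
    scored = [(cosine(tokens, tokenize(claim)), claim) for claim in checked_claims]
    return [claim for score, claim in sorted(scored, reverse=True) if score >= threshold]

# Hypothetical database of already-checked claims
checked = [
    "The unemployment rate fell to 4 percent last year",
    "Vaccines cause autism",
]
match_claim("Unemployment fell to 4 percent last year", checked)
# → ["The unemployment rate fell to 4 percent last year"]
```

In real systems this retrieval step only routes a statement to an existing verdict; genuinely new claims still need human (or much more sophisticated automated) verification, which is precisely where the full-fledged systems are missing.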

References

  1. Higgins, K. (2016). “Post-truth: a guide for the perplexed”. In: Nature News 540.7631, p. 9.
  2. Löfstedt, R. (2005). Risk management in post-trust societies. Springer.
  3. Nichols, T. (2017). The death of expertise: The campaign against established knowledge and why it matters. Oxford University Press.
  4. Silver, N. (2012). The signal and the noise: why so many predictions fail–but some don’t. Penguin.
  5. Brody, D. C. and D. M. Meier (2018). “How to model fake news”. In: arXiv preprint arXiv:1809.00964.
  6. Lévy, P. (2002). Cyberdémocratie: essai de philosophie politique. Paris: Odile Jacob.
  7. Norris, P. (2001). Digital divide: Civic engagement, information poverty, and the Internet worldwide. Cambridge University Press.
  8. Tucker, J. et al. (2018). “Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature”.
  9. Diamond, L. and L. Morlino (2005). Assessing the quality of democracy. JHU Press.
  10. Crowell, C. (2017). “Our approach to bots & misinformation”. In: Twitter Public Policy. June.
  11. Weedon, J., W. Nuland, and A. Stamos (2017). “Information operations and Facebook”.
  12. Lazer, D. M. et al. (2018). “The science of fake news”. In: Science 359.6380, pp. 1094–1096.
  13. Hassan, N. et al. (2017). “ClaimBuster: the first-ever end-to-end fact-checking system”. In: Proceedings of the VLDB Endowment 10.12, pp. 1945–1948.
  14. https://en.oxforddictionaries.com/word-of-the-year/word-of-the-year-2016
  15. https://www.merriam-webster.com/dictionary/disinformation
  16. https://www.dictionary.com/browse/fact-check
  17. https://www.dictionary.com/browse/disinformation