People believe in fake news. No matter how absurd a story may sound, or how unreliable its sources are, readers and viewers around the world will still share extraordinary conspiracies on their social media profiles, creating echo chambers that can resonate around the globe.
This process accelerated during the months marked by the Covid-19 pandemic, monthslong lockdowns, and a growing tendency to mistrust scientists and official guidelines.
The question, then, is why. Why do people believe misinformation, and rely on it to shape their understanding of the world? Which mechanisms are activated when someone gives credence to a piece of news that others would consider absurd?
The answer lies in our brain neurons, or somewhere in between them.
Pagella Politica and Facta partnered with De Facto – a project funded by the European Union, designed and managed by Bulgarian researchers at NTCenter – to conduct a collaborative investigation aimed at exploring how our cognitive layers contribute to the way we manage misinformation, and make us believe or deny the claims we interact with on a daily basis.
Relevant material about this investigation was gathered through Truly Media, a collaborative platform used to verify digital content.
The neuroscience of disinformation
Neuroscience studies the anatomical basis of pretty much everything we do. In our brain, billions of neurons constantly interact with each other, channeling different chemicals and transporting stimuli and information.
Even without going into specifics about the mysteries of the human mind, the consequences for disinformation are quite straightforward.
The first thing to keep in mind – no pun intended – is that our brain is wired so that repeated information tends to travel through the same route, over and over again. Therefore, when we encounter something that is consistent with what we already know and believe, that piece of information strengthens the existing path, making it wider and stronger.
See it this way: inside the brain, information that corroborates our beliefs travels along highways, while hints that somehow go against what we consider to be true take much smaller, unbeaten roads, or are pushed onto a side road and discarded straight away. The very first time we interact with something new, the route for that information does not exist yet, and the element may be discarded entirely.
This mechanism explains something crucial about our relationship with facts and information: we are stubborn creatures. Unconsciously, we stick to our beliefs and easily discard anything that does not fit into that model.
The limitations of fact-checking
What follows from this concept is an intrinsic limitation of fact-checking. No matter how solid the evidence is, or how clearly it is presented: a fact-checking analysis will only be effective if the reader's brain is open to receiving new information, and to changing course within the usual dynamics that govern the "road network" built by their neurons.
Otherwise, people will simply dismiss the analysis as "fake," "manipulated," or hijacked by some mysterious higher powers that pull the strings of the system we live in.
This also helps explain why the editorial team at Facta keeps receiving messages such as «How can you defend someone like Bill Gates?» or «You want to lead us into a dictatorship of thought!» Even though Facta's analyses are based on solid evidence and pure facts, some readers may not be ready to process a new perspective that is simply too distant from their customary interpretative paths.
The way our brain works can tell us a lot about disinformation. Four mechanisms, in particular, are useful to explain the cognitive logic that lies behind it.
The four cognitive mechanisms behind disinformation
The research team at De Facto identified four main cognitive layers that help explain why people believe fake news and instead discard more accurate analyses.
These are motivated cognition, frames as thinking contexts, equivalency and emphasis frames, and systemic causality.
Motivated cognition
The first main pillar is motivated cognition. This has to do with how we perceive ourselves in relation to others, and it determines how much we are influenced by the information we receive.
Examples of motivated cognition include our tendency to trust people from our social circle, with whom we share ideas and perspectives, or to consider a report trustworthy if it comes from a source we deem authoritative, such as an eminent scientist or a government representative.
Motivated cognition can also be applied to the world of social media. When we read something online, we are less likely to raise doubts about it if it comes from a verified account, with a professional-looking picture and a good description. An anonymous profile, or one with a generic picture, should instead make us think twice about its reliability as a source. In the same way, the number of shares, likes, and comments all contribute to shaping the credibility of an online source.
This mechanism can be exploited by those who craft disinformation. A common technique among fake-news creators is to start from official documents released by governments, authorities, or respected agencies, and subtly change or reinterpret a few lines in a way that completely alters the real meaning of the whole document.
We can see this mechanism in several stories debunked by Facta. A false story that circulated online last May, for instance, claimed that an official document released by the Italian Health Ministry prohibited autopsies on patients who had died of Covid-19. The document was real, but it was intentionally misinterpreted in order to convey a twisted message.
Another instance was a fabricated document, allegedly signed by the Italian Prime Minister, extending the state of emergency in the country until Spring 2021. In both cases, official letterheads and a bureaucratic, formal writing style were exploited to make false information look trustworthy.
Through the platform Truly Media, researchers at De Facto isolated another example of motivated cognition in action. On March 25, the website Oasi Sana published an article with the headline: «Now they say it even at the European Parliament: "The 5G accelerated the Covid-19 pandemic!"». The preview image users see when the post is shared on social media, furthermore, shows a group of MEPs sitting at their desks. Both the title and the thumbnail play on motivated cognition, as they convey the message that a respectable authority such as the European Parliament is endorsing the content of the article. In truth, the European Parliament has officially denied any connection between Sars-CoV-2 and 5G.
Frames as thinking contexts
The second key concept is framing. Whenever we receive information, or simply think about something, our mind uses predefined conceptual schemes to contextualize it and determine whether we agree or disagree with it, praise it or condemn it.
An important element to notice is that frames are highly subjective and depend mostly on people's individual backgrounds, experiences, and social habits. Whenever we learn something that strengthens our existing frames, it will generally be considered true and worth storing in our memory. If we encounter information that goes against our frames, instead, our mind will likely discard it.
The consequences for disinformation are immediate: whenever we read something that fits in our existing frames, we will believe it and the information will be stored to support our perspective about that topic. Contrasting factors are likely to be discarded and left without further consideration, in order to avoid cognitive dissonance within the frame.
Think about migrants: an increasingly common narrative, mostly supported by right-wing media, tends to portray migrants who try to reach Europe by sea as generally "bad" or "dangerous." People who gradually adopted and built upon this framing by constantly repeating and reinforcing it will easily believe that «42 migrants disembarked in Umbria», even though Umbria has no access to the sea, or that the American rapper Drake is a «spokesperson» for migrants who advocates for the imprisonment of the Italian politician Matteo Salvini. People who developed a different framing of migrants, on the other hand, will likely be skeptical when reading these same claims.
Another example of framing drawn from Truly Media is a video titled «5G radiations make Covid-19 more intense?», published by kla.tv, an online TV channel based in Switzerland that NewsGuard labeled as unreliable. The thumbnail of the video merges the image of a man coughing, wearing gloves and a heavy jacket – the general framing of winter as the season of colds and flu – with a 5G antenna, thus creating an unconscious connection between the two.
Equivalency and emphasis frames
Diving deeper into the concept of frames, "equivalency" and "emphasis" frames are particularly relevant when talking about disinformation.
This category refers to how information is communicated to the public: we will probably develop different ideas about Covid-19 if we are told that, as of October 3, at least 322,000 people in Italy had been infected since the beginning of the pandemic, or that this figure corresponds to barely 5% of all the people tested. The data behind the two claims are exactly the same, but the framing leads to different reactions.
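As a back-of-the-envelope illustration of how identical data can be framed in opposite ways, the sketch below derives the implied number of tests from the two figures in the article (322,000 cases, roughly 5% of those tested). The variable names and the derived total are assumptions for illustration only, not figures from the article.

```python
# Toy illustration of equivalency framing: one data point, two framings.
infected = 322_000       # confirmed cases in Italy as of October 3 (from the article)
share_of_tested = 0.05   # the article says this is roughly 5% of those tested

# Implied number of people tested, derived from the two figures above
tested = infected / share_of_tested

# "Alarming" frame: emphasize the absolute count
frame_a = f"At least {infected:,} people have been infected"
# "Reassuring" frame: emphasize the small share of positives
frame_b = f"Only {share_of_tested:.0%} of the {tested:,.0f} people tested were positive"

print(frame_a)
print(frame_b)
```

Both strings describe exactly the same dataset; only the emphasis changes, which is the point the equivalency-frame concept makes.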
Equivalency frames can be useful to debunk claims that are generally misleading. While analyzing a video that accused Italian hospitals of forcing women to wear protective masks during childbirth, for example, Facta partly confirmed the requirement but also explained that it was being enforced in a number of other countries such as France, Spain, Canada, and Japan.
Similarly, Pagella Politica confirmed Matteo Salvini’s statement about the fact that Italian kids need to wear masks at school, but also highlighted that this protocol was common in several European countries.
Systemic causality
The last cognitive layer useful for understanding disinformation is systemic causality. To understand what it means, we can compare it with direct causality, a basic mechanism we all unconsciously employ to learn new things and make sense of the world.
Generally, through direct causality we learn a correlation between two actions because we physically experience one as a cause and the other as its consequence: a baby cries because it expects to be fed; if I turn a key in a lock, the door opens.
There are concepts and mechanisms, though, that cannot be witnessed immediately: racism, for example, but also processes such as the way a wireless connection works or the effects of medicines on our body.
According to De Facto, «most of the disinformation which is propagated through online media is not binary in nature (yes/no, white/black) but systemic».
A clear example of this is related to climate change. Over the past couple of years, the United States President Donald Trump tweeted several times that, since the weather was generally cold, climate change couldn't be so bad. This is a classic application of direct causality: the weather is cold, therefore there is nothing to worry about. Of course, climate change is a much more complex issue that can't be explained through immediate cause-effect connections.
Another instance can be found on Truly Media. Back in July, the American epidemiologist Eric Feigl-Ding retweeted a post that claimed: «If you accept masks you'll accept track and trace, if you accept track and trace you'll accept testing, if you accept testing you'll accept the vaccine…» and so on. De Facto noted that, in this case, systemic causality was replaced by a wrongly attributed direct cause-effect relationship and bogus science. There is no direct connection between wearing a protective mask and being in favour of contact tracing, but the link was nevertheless used to support conspiracy theories.
Disinformation is driven by a series of cognitive layers, which lead some people to believe fake news and others to approach it from a more critical standpoint, raising questions and doubts. Understanding how these mechanisms work can protect us against disinformation, and can guide journalists and fact-checkers toward creating more effective content.
As with everything in science, the research process is not yet complete and remains open to further clarification and discovery.
This is a collaborative investigation written by Pagella Politica and Facta. The scientific framework has been provided by the De Facto research team. The platform Truly Media has also been used to gather relevant material.