Jennifer Kavanagh and Stijn Hoorens / Apr 2018
Disinformation and “fake news” existed long before the Internet. The phenomenon can be traced back nearly 100 years, from the Nazi and Soviet propaganda machines of the 1930s to the emergence of “new journalism” in the 1960s and 70s, which conveyed writers’ subjective impressions and blurred the line between fact and opinion.
Yet even though we have seen it before, a new public opinion survey shows that a large majority of Europeans are concerned about modern-day “fake news” and consider it a danger to democracy. Over the past year, much of the onus has been placed on social media platforms and companies like Google to get their ‘house in order’, both by stopping the spread of disinformation and by educating their users to identify disinformation online.
Some countries have already taken stringent action. Germany, for instance, introduced its “hate speech law”, which allows for the fining of media platforms that facilitate incorrect, xenophobic or terrorist content. In France, Macron’s government is preparing a “fake news law” that would allow judges to order false news content to be taken offline during election campaigns.
More recently, the High-Level Expert Group on Fake News and Disinformation urged the EU to take a tougher stance on platforms and aggregators that spread disinformation. The group’s report also noted that tech companies should support media and information literacy efforts that help the public identify disinformation online.
However, is there too much emphasis on the tech companies and not enough on the users themselves? Approaches that focus on regulation and enforcement may only serve to absolve the public of responsibility for their own actions: the choices they make about which sources of information to consume and which to believe, and the steps they take to determine what is true and what is false.
RAND Corporation’s report on ‘Truth Decay’, which analyses the erosion of respect for facts and evidence in political life and public discourse, takes the view that people’s inherent cognitive biases play a decisive role in which facts they choose to believe. For instance, humans tend to hold on to prior beliefs even when presented with information that clearly demonstrates those beliefs are incorrect or misguided. In fact, when confronted with corrective information, some people become even more convinced of their original belief and less willing to consider alternatives. This might offer one explanation for why “fake news” tends to survive and why some aspects of disinformation could prove difficult to overcome.
Polarisation, whether political or demographic, can reinforce these biases by creating two or more opposing sides, each with its own narrative, worldview, and even facts. Several European elections, most recently in Italy, have seen parties on the far left – such as the Five Star Movement – or far right – such as Lega Nord or Forza Italia – gain in popularity. Highly polarised groups can become insular in their thinking and communication. In such closed environments, false information can easily proliferate and become ingrained, because people are not forced to confront information that challenges their beliefs or to interact with those who have different beliefs and backgrounds.
It is in the context of these factors inherent to human behaviour that tech companies have helped to facilitate disinformation. Firstly, the rise of the Internet and social media platforms has drastically increased the volume of information available and the speed with which it can be disseminated. After investigating 126,000 rumours and false news stories on Twitter, researchers at the Massachusetts Institute of Technology concluded that they travelled faster and reached more people than the truth.
Secondly, tech firms have enabled the creation of online ‘echo chambers’ – in which social media users are more likely to engage with people and sources that share their beliefs and worldviews – which has helped to entrench people’s cognitive biases. These biases are reinforced further by the algorithms of social media platforms and search engines, which feed users stories tailored to their profiles and past behaviour. Research from the think-tank Demos appears to confirm the existence of an echo chamber effect in European politics: the report suggests a strong connection between a user’s ideology and the other users and news sources they interact with online.
While tech companies can be part of a solution to disinformation, forcing them to ban disinformation or to offer greater transparency is not enough. Ultimately, people decide what they read, what they believe and what they share. Regardless of the algorithms at work behind the scenes, users’ personal experiences, ideologies and choices will ultimately shape what they see online. Information consumers in a democracy have a responsibility to understand how their biases affect their interpretation of information, and even their choices about which sources of information they seek out and believe.
Disinformation and “fake news” had a significant impact on society long before Google, Facebook and Twitter existed; this is as much an offline problem as an online one. Authorities can continue to seek to punish tech companies for the circulation of false articles. However, this is unlikely to make a difference until more people take the time, and acquire the skills, to distinguish fact from fiction.