
Misinformation online: an infodemic



It’s no secret that social media platforms are transforming the way we consume information and form opinions. Ofcom’s 2019 news consumption report revealed that half of all UK adults get their news from social media, with more recent data suggesting this figure has risen in light of COVID-19. Messaging apps like WhatsApp have also become core networks for the circulation of news in non-Western countries: in Brazil, for example, 53% of people use WhatsApp to share and discuss news stories. Despite this, people are increasingly distrustful of what they see and hear on social media: 56% of adults surveyed around the world in 2020 said they were concerned about what is real or fake online. In recent years, social media platforms have inadvertently become breeding grounds for fake news and misinformation, from the rapid spread of conspiracy theories to propaganda built on false information unfairly swinging the outcome of an election. In 2016, fake news plagued both the UK Brexit referendum and the U.S. presidential election, whilst in recent months conspiracy theories and misinformation about coronavirus have flooded social media.

So how have social media platforms become such influential misinformation hotbeds? Behavioural experts have shed light on how online fake news ecosystems tap into human instinct. A 2016 Reuters Institute study investigating the news consumption patterns of 376 million Facebook users found that people tend to seek out information that aligns with their own perspectives – a tendency known as confirmation bias. Social media algorithms can exploit this facet of human nature by presenting us with distorted or exaggerated information that reinforces the views we already hold. Oxford Dictionaries selected “post-truth” as its word of the year in 2016, defining it as “relating to circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.”

Psychologists also point to the illusory truth effect: people only need to hear or read something three times before the brain starts to treat it as true – even if the information is false. So if false rumours about an election candidate are circulated and shared online often enough, many social media users will begin to perceive them as true. This can influence the decisions we make and manipulate our voting behaviour.

So should we re-evaluate the way we consume information online? Fortunately, almost half (49%) of the leading tech experts surveyed by Pew Research believe that the online information environment will improve over the next decade, with many predicting that innovators will develop tech-based solutions to help combat fake news and misinformation. For now, though, we can exercise agency over the content we digest by being mindful consumers. The Media Bias/Fact Check website allows users to find out whether a particular news source is biased, and we can look out for warning signs in the information we encounter online. Articles that spark intense feelings of anger, disgust or fear, stories that make extraordinary, sensationalist claims in their headlines, and posts riddled with spelling and grammatical errors (if it isn’t spell-checked, it likely isn’t fact-checked) are all prime suspects for inaccuracy.

This article was brought to you by Dig Detox. Our mission is to help people use technology safely because we believe health is your most valuable asset. Please visit www.digdetox.com for more articles, research and information about the movement.

By Effie Webb

University of Oxford

First Published on 7th June 2020

SOURCES:

Pew Research 

Ofcom

The Next Web

The Reuters Institute 

The Decision Lab 

The Conversation
