Over the past year, ‘fake news’ has become a near-ubiquitous term. Cultivated in the petri dish of the US presidential election, this ‘digital virus’ has spread across the planet, infecting individuals and institutions en masse. The dissemination of false information is not a particularly recent phenomenon (it has been practised for centuries), but it has certainly been empowered by recent technological advancements – the most influential being social media. Facebook, Twitter, and Google are currently embroiled in the struggle against the spread of harmful and misleading information to the many millions who have started using these sites to keep up with current events. The problem, however, doesn't lie in their algorithms but rather in the functioning of our brains. It's the way we're wired to think, the way we accept and reject information, that is the root cause of the ‘fake news’ problem.
Oxford Dictionaries declared ‘post-truth’ the 2016 international Word of the Year, defining it as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.” One only needs to look at Donald Trump's popularity to see how this works. In the months preceding his election, and in the ones following it, the mercurial Trump was immensely successful in galvanising his supporters into believing increasingly outlandish claims. From wiretapping accusations to assertions of voter fraud, Trump has spread no small number of incendiary messages against his political rivals. Why do so many of his supporters continue to believe these claims even after fact-checkers denounced them as unfounded and downright bizarre? The susceptibility to fake news isn't limited only to Trump supporters; it extends to every single internet user. And that's due to a few psychological leanings that exist in each of us.
The psychology that makes ‘fake news’ work
Confirmation bias, one of the most common cognitive biases, is the psychological tendency to accept only information that supports one's pre-existing beliefs. The human brain is not overly fond of exertion, and the effort required to dispel cognitive dissonance – the mental stress (discomfort) experienced by a person who simultaneously holds two or more contradictory beliefs, ideas, or values – is one that it actively avoids. Since information that complies with pre-existing beliefs is easier to digest, that is exactly what the mind accepts, rejecting any non-conforming information in the process. That is the first problem.
The second is the ‘echo chamber’ phenomenon created by social media. People on sites like Facebook and Twitter follow only those people and information sources, and hence see only that content, which reflects their beliefs and values. A 2015 study of Facebook users, titled ‘The spreading of misinformation online’, found that this exposure to selective content greatly facilitates the propagation of false information. Since sites like Facebook use algorithms to display relevant content to their users – that is, content which users are most likely to read and engage with – they create large ‘filter bubbles’ wherein people are never exposed to opposing points of view. Any information that opposes the view of this ‘echo chamber’, no matter how factually correct, is collectively, and hence individually, lambasted as false and misleading.
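The filter-bubble effect described above can be sketched with a toy simulation. Everything here is an illustrative assumption – the stance scale, the scoring function, and the numbers are invented for demonstration and bear no relation to any platform's actual ranking algorithm – but it shows how ranking purely by predicted engagement can turn a balanced pool of posts into a one-sided feed:

```python
import random

random.seed(42)

# Toy model (assumed): each post has a 'stance' in [-1, 1], as does the user.
user_stance = 0.8  # the user strongly favours some position

# A balanced pool: stances drawn uniformly, so roughly half agree, half disagree
posts = [random.uniform(-1, 1) for _ in range(1000)]

def engagement_score(post_stance, user_stance):
    # Hypothetical proxy metric: posts that agree with the user score higher,
    # because agreeable content is more likely to be read and engaged with
    return 1 - abs(post_stance - user_stance) / 2

# Show the user only the top 20 posts by predicted engagement
feed = sorted(posts, key=lambda p: engagement_score(p, user_stance), reverse=True)[:20]

agreeing_in_pool = sum(1 for p in posts if p > 0)
agreeing_in_feed = sum(1 for p in feed if p > 0)
print(f"Posts agreeing with the user in the pool: {agreeing_in_pool}/1000")
print(f"Posts agreeing with the user in the feed: {agreeing_in_feed}/20")
```

Even though the underlying pool is roughly half-and-half, the curated feed is entirely one-sided: the user never sees a dissenting post, which is the ‘filter bubble’ in miniature.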
This issue is compounded by a cultural phenomenon known as the ‘bandwagon effect’, which Wikipedia defines as “a phenomenon whereby the rate of uptake of beliefs, ideas, fads and trends increases the more that they have already been adopted by others.” Suppose, for example, that you think the implementation of universal basic income is a brilliant idea. If you arrived at this conclusion on your own, you will hold the belief provisionally – that is, you'll still be open to opposing viewpoints. But when you find out that several others share the same belief, your opinion gains an unshakeable certainty, accompanied by scorn for opposing rhetoric.
Take the recent demonetisation in India as a real-world example. Prime Minister Modi's move was widely hailed as a long-awaited attack on black money and corruption. The negative consequences of the hasty move were explained away as a necessary inconvenience, and the majority of people were in favour of the ‘masterstroke’. But all this happened even before the Government and the RBI released official numbers on how much black money had been recovered. There was no proof that the demonetisation had achieved its proclaimed intent, and yet people were overwhelmingly in support of it. Those who weren't were deemed anti-Modi and anti-progress. That raises the question: where did these people get the information that the demonetisation had worked?
Who's the source?
The problem of ‘fake news’ doesn't exist solely because people resist opposing views or are exposed only to opinion-affirming information. The source of the news being consumed also matters. Courtesy of our fast-paced lives, we have no time to verify the veracity of the information we consume. People usually just skim through the headlines on their social media and RSS feeds, let alone cross-check all the facts and events cited by the countless blogs, articles, op-eds, and social media posts. Even open-minded people, those who recognise and actively fight their confirmation bias, aren't particularly adept at identifying the falsity of seemingly credible news stories. If what we're reading, hearing, or watching appears to have some validity or merit to it, we assume it to be true, credible facts be damned. Dean Burnett, author of The Idiot Brain, does a fantastic job of explaining this problem in an article he published in The Guardian.
We also inherently tend to agree with views shared by our friends or by people we hold in high regard. If a friend we trust shares a piece of false information, we will tend to believe it without checking its credibility. A bigger danger is that false information can spread like wildfire across the internet if someone with a large follower base shares it on social media.
Tech giants Facebook and Google have finally begun their battle against ‘fake news’, but the onus for identifying false information still largely lies on their users, not their algorithms. It would appear, then, that the creators and disseminators of this problem are the only ones who can solve it. And how well that plays out in the coming years is anyone's guess.