Why do we believe fake news?

It’s commonplace to say that we’re all deluged by more information than we can possibly handle. Less commonplace is the acknowledgement that human judgements also rely upon a second kind of information, one that doesn’t describe the world directly, and that offers one of the most powerful tools we possess for dealing with the deluge itself. This is social information. Or, in other words: what we think other people are thinking.

Consider a simple scenario. You’re in a crowded theatre when, suddenly, people all around you start panicking and looking for an exit. What do you do, and why? Your senses inform you that other people are moving frantically. But it’s the social interpretation you put on this information that tells you what you most need to know: these people believe that something bad is happening, and this means you should probably be trying to escape too.

At least, that’s one possible interpretation. It may be the case that you, or they, are mistaken. Perhaps there’s been a false alarm, or part of the performance has been misunderstood. Reading social information accurately is an essential skill, and one most of us devote an immense amount of effort to practising. Indeed, wondering what’s going on inside someone else’s head is one of humanity’s greatest fascinations – alongside trying to influence it.

So far, so familiar. But the information suffusion of digital culture has introduced something new into this ancient psychological equation: a whole new level of reliance upon social information; and a whole new set of hazards and anxieties around errors, manipulation and cascades of influence.

Danish researchers Vincent F Hendricks and Pelle G Hansen give these tumultuous processes a name – an “information storm”, or infostorm, in the sense of a sudden and tempestuous flow of social information – and suggest an intriguing alternative to the narratives of human folly and unreason so often applied to fake news and tribal divisions online.

Rather than despairingly deciding that we now live in a post-truth era ruled by irrational forces, they argue in their book Infostorms, many of the digital world’s most fractious sites are in fact the results of perfectly rational decision-making by those involved – and originate not so much in human foolishness as in the nature of information environments themselves.

Consider the spread of an item of misinformation through a social network. Once a small number of people have shared it, anyone subsequently encountering that information will face what is at root a binary choice: is what they’re looking at true, or untrue? Assuming they have no first-hand knowledge of the claim, it’s theoretically possible for them to look it up elsewhere – a process of laborious verification that involves trawling through countless claims and counter-claims. They also, however, possess a far simpler method of evaluation, which is to ask what other people seem to think.

When we are confronted with a blizzard of unfamiliar material, we often seek cues from other people as to what to believe

As Hendricks and Hansen put it, “when you don’t possess sufficient information to solve a given problem, or if you just don’t want to or have the time for processing it, then it can be rational to imitate others by way of social proof”. When we either know very little about something, or the information surrounding it is overwhelming, it makes excellent sense to look to others’ apparent beliefs as an indication of what is going on. In fact, this is often the most reasonable response, so long as we have good reason to believe both that others have access to accurate information, and that what they seem to think and what they actually believe are the same.
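This logic can be made concrete with a toy simulation. The sketch below is my illustration rather than anything from Hendricks and Hansen’s book, and its parameters are assumptions: each agent’s private signal is right 70% of the time (p_correct), and an agent imitates whenever the visible majority outweighs a single private signal. Under those rules, every agent behaves rationally:

```python
import random

def herd(n_agents=100, p_correct=0.7, world=True):
    """One run of a simple sequential social-proof model.

    Each agent receives a private signal that matches the true state of
    the world with probability p_correct, sees all earlier public
    announcements, and then announces a belief. Once the public majority
    outweighs a single private signal, imitation is the rational move,
    and a cascade locks in, right or wrong."""
    announcements = []
    for _ in range(n_agents):
        signal = world if random.random() < p_correct else not world
        lead = announcements.count(True) - announcements.count(False)
        if lead > 1:
            belief = True        # the crowd outweighs my one signal: imitate
        elif lead < -1:
            belief = False
        else:
            belief = signal      # the crowd is inconclusive: trust my signal
        announcements.append(belief)
    return announcements[-1]     # what the settled crowd believes

random.seed(42)
runs = 2_000
wrong = sum(not herd(world=True) for _ in range(runs))
print(f"rational agents, yet {wrong / runs:.1%} of crowds settle on a falsehood")
```

Run it, and a sizeable minority of simulated crowds settle confidently on a falsehood. Nobody has been foolish; early announcements have simply drowned out everyone’s private information.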

The automation of this observation is one of the foundational insights of the online age. The great initial innovation of Google’s search engine was that – rather than attempting the impossible task of coming up with an original assessment of the quality and usefulness of every website in the world – users’ own actions and attitudes could become its key metric. By looking at how web pages linked to one another, Google’s PageRank algorithm put a proxy for content creators’ own attitudes at the heart of its evaluation process.
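In outline, the idea can be expressed in a few lines. The following is a toy sketch of the core calculation described in the original PageRank paper (power iteration with the paper’s damping factor of 0.85), not Google’s production system, and the three-page example network is invented for illustration:

```python
import numpy as np

def pagerank(links, damping=0.85, iterations=50):
    """Minimal PageRank by power iteration.

    links[i] lists the pages that page i links to. A page's score is
    split among its outbound links, so each link acts as a vote whose
    weight reflects the voter's own standing."""
    n = len(links)
    rank = np.full(n, 1.0 / n)
    for _ in range(iterations):
        new = np.full(n, (1.0 - damping) / n)
        for page, targets in enumerate(links):
            if targets:   # share this page's rank among the pages it cites
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new[target] += share
            else:         # a page with no links spreads its rank everywhere
                new += damping * rank[page] / n
        rank = new
    return rank

# A made-up three-page web: pages 0 and 1 both link to page 2,
# which links back to page 0. Page 2 collects the most 'votes'.
print(pagerank([[2], [2], [0]]).round(3))
```

Each page’s score is divided among the pages it links to, so a link functions as a vote whose weight reflects the voter’s own standing: a proxy for what content creators collectively think.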

It may sound crushingly obvious today, but it’s worth pausing to consider how fundamental the measurement and management of social information is to almost every company seeking to turn trillions of bytes of online data into profit. User traffic, reviews, ratings, clicks, likes, sentiment analyses: what people are thought to be thinking makes the digital world go round. And these currencies of reputation, unlike money, are only enhanced by usage. The more they’re spent, the more their worth increases. The public signal is all.

How to handle an infostorm? In a social situation in the real world, a false consensus can be dispelled by publicly sharing trustworthy new information: an official announcement in our hypothetical crowded theatre; a confession of confusion by someone who started a rumour. Online, the notions of universally trusted sources and universally accessible announcements are both problematic, to say the least. Yet work such as Hendricks and Hansen’s suggests that there’s hope to be found if we remember that the mechanisms involved are fundamentally agnostic about truth and untruth. Infostorms, like actual storms, are the products of climatic conditions – symptoms of something far larger. And different climates can produce very different results.

Networks where members are, for example, randomly exposed to a range of views are less likely to experience cascades of unchallenged belief. The disproportionate significance of the first responses to a claim can be addressed by engineered attention to sources, authenticity and provenance; and the public introduction of accurate information can, if a trusted source is involved, dispel false consensus.
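Extending the earlier toy simulation suggests how much these design choices matter. In the variant below (again my illustration, not the book’s: voice_rate, the provenance tag and all the numbers are assumptions), some agents always voice their own view, and announcements can be tagged by whether they reflect a private signal or mere imitation, so that imitation isn’t counted as evidence:

```python
import random

def cascade(n_agents=500, p_correct=0.7, voice_rate=0.0,
            track_provenance=False, world=True):
    """Sequential cascade with two interventions:
    voice_rate       -- fraction of agents who always voice their own view;
    track_provenance -- announcements are tagged as 'own signal' or
                        'imitation', and only the former count as evidence.
    Returns the share of the final 100 agents holding the false belief."""
    beliefs = []
    lead_all = lead_own = 0  # running tallies of True-minus-False announcements
    for _ in range(n_agents):
        signal = world if random.random() < p_correct else not world
        lead = lead_own if track_provenance else lead_all
        if random.random() < voice_rate:
            belief, own = signal, True   # independent voice: ignores the crowd
        elif lead > 1:
            belief, own = True, False    # imitate the visible majority
        elif lead < -1:
            belief, own = False, False
        else:
            belief, own = signal, True   # record inconclusive: trust own signal
        beliefs.append(belief)
        step = 1 if belief else -1
        lead_all += step
        if own:
            lead_own += step
    tail = beliefs[-100:]
    return tail.count(not world) / len(tail)

random.seed(1)
for voice, prov in [(0.0, False), (0.2, False), (0.2, True)]:
    wrong = sum(cascade(voice_rate=voice, track_provenance=prov)
                for _ in range(2_000)) / 2_000
    print(f"voices={voice:.0%}, provenance={prov}: wrong beliefs {wrong:.1%}")
```

Tellingly, in this toy model independent voices alone barely help while imitations are still counted as evidence; it is the attention to provenance that lets accumulated genuine signals outweigh the herd.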

Perhaps most significantly, the way in which social information ripples through a network can be understood in terms of rational reactions to uncertainty, rather than irrational impulses addressable only by further irrationalism. And the more we understand the chain of events that led someone towards a particular perspective, the more we understand what it might mean to arrive at other views – or, equally importantly, to sow the seeds of sceptical engagement.