Reece is Strategy Director at Blue State — a full-service creative agency that’s worked with some of the world’s best-known causes, companies and charities. Based in their London office, he specialises in audience insight, strategy and measurement. Over the years, he’s helped governments combat extremism and terrorism by understanding how narratives affect and divide communities.

Fake news is a problem. It’s attractive, spreads quickly, and divides communities. Whether you know it or not, you’ve probably come across it: only 4% of Britons can tell it apart from real news.

In the run-up to the election, for example, you might have seen the furore over whether a photograph of a boy on the floor of a hospital was staged; a photo, initially dismissed by Boris Johnson, which catapulted NHS funding to the heart of his campaign trail.

The story stemmed from a couple of bad actors, likely politically affiliated, who posted fake ‘facts’ about it online, suggesting they knew sources that could prove it was staged. The photo was neither fake nor staged, yet fake news caused people to wonder whether it might be. In the coming months, we’ll likely see just how much fake news like this circulated, and whether it in some way affected the election result.

If you’re confused, it’s understandable. Fake news is hard to spot, tough to stop, and is wired to get the most discerning of people to click. It’s a murky, complicated world — and that’s the intention.

Knowing how and why it spreads is the first step in understanding why it’s become such a formidable force and how to stop it. The more we know about it, the better we can fight it and see it for what it is: a scam that makes money, hurts people and has most of us fooled.

In my work with online communities over the years, I’ve come to understand where the stories come from — the motivations behind them, and why they spread. If there’s one thing I’ve learned, it’s that there’s not a single answer: if there was, we’d have fixed it by now. 

The truth is, it’s become an industry — and the motives that sit behind it are as diverse as the fake angles being spewed out. Here’s our take on who writes it, why they write it and why it spreads.

Who writes it?

The sad truth is that it’s usually a person behind a keyboard typing the sensationalist stories we see in our social media feeds every day, deliberately constructing untrue narratives. In most cases, fake news is written by humans, not machines. Often they’re doing it to make money — or, with more malicious intent, to tap into social divisions and fracture communities.

But it’s not just humans. 

We’re living in a world where machines can write disinformation and automatically put it into the world for us to read. With so much data readily available about what gets us to click through on news articles online, artificial intelligence can now construct fake articles that sound legitimate, and even create videos — known as “deep fakes” — that serve the same purpose.

Why do they write it?

The people who write it are engineering content that serves one of two purposes: to make them money, or to fracture communities.

A recent Guardian investigation found a network of more than 20 Facebook pages funnelling 1 million followers to 10 ad-heavy websites, where the traffic could be milked for profit. This isn’t a new tactic. Over the years I’ve encountered news aggregator sites doing the same thing — and have read interviews with people who say they make as much as $10,000 every month from writing fake news. It’s become a business: tapping into pre-existing ideologies and controversial views is a sure-fire way to drive traffic and make a quick buck.

Perhaps even more menacing are the fake news articles written with the express intention of sowing discord and dividing people. Influential media houses like Spiked Online, Voices of Europe and Breitbart News are just some of the culprits known to proliferate divisive content — and with heavily engaged audiences, their stories tap into ideological echo chambers and spread like wildfire.

How does it spread?

Fake news travels six times faster than the truth — and it’s in the sharing and spreading that it acquires its power. But pinning down the way it spreads to a single factor isn’t possible, because there are so many forces and approaches at play.

It’s a messy landscape of bots, people and tactics that creates the spider web of fake news we see online today — and with so many approaches, stopping it has become a game of whack-a-mole.

Bots — or automated accounts — are a huge factor. They’ll share links to content as often as every second, muddying the waters by sharing lies and truth in equal measure. It’s a volume game: getting links to fake news out there in the hope that the right people stumble across them, or drowning out a trending hashtag with a counter-narrative to maliciously reframe a story. Once bots hijack a genuine trending hashtag, it becomes almost impossible to find the real content.

One example I traced involved an article, presumably published by a Russian source, ‘debunking’ claims around the Skripal case. It danced between fake accounts, shared from bot to bot but securing little engagement, until a far-right influencer stumbled across it and shared it… to his hundreds of thousands of followers. The article spiked hugely, and the fake narrative — with no evidence behind it whatsoever — took off.

The truth is, it’s not always easy to spot a bot. Not all of them send out thousands of posts per day — and one common tactic can hoodwink us entirely: hijacking real people’s social accounts, through data breaches or trickery, and using their profiles to share content. When a network believes a post comes from a trusted friend, family member or connection, the content feels more legitimate.
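To make the volume point concrete, here’s a minimal Python sketch of the naive posting-frequency check that analyses like this often start from. The account names, timestamps and threshold are all invented for illustration; a real investigation would pull post histories from a platform’s API and combine many more signals.

```python
from datetime import datetime, timedelta

# Invented sample data: account handle -> list of post timestamps.
# A real investigation would pull these from a platform's API.
accounts = {
    "@blast_news_247": [datetime(2019, 12, 1, 9, 0) + timedelta(seconds=i)
                        for i in range(300)],   # one post per second
    "@ordinary_user":  [datetime(2019, 12, 1, 9, 0) + timedelta(hours=3 * i)
                        for i in range(4)],     # a few posts per day
}

POSTS_PER_HOUR_THRESHOLD = 60  # arbitrary illustrative cut-off

def looks_automated(timestamps):
    """Flag an account whose sustained posting rate exceeds the threshold."""
    if len(timestamps) < 2:
        return False
    span_hours = (max(timestamps) - min(timestamps)).total_seconds() / 3600
    # Guard against a zero-length span before dividing.
    return len(timestamps) / max(span_hours, 1 / 3600) > POSTS_PER_HOUR_THRESHOLD

for handle, stamps in accounts.items():
    print(handle, "likely automated" if looks_automated(stamps) else "no flag")
```

As the paragraph above notes, volume alone isn’t enough: low-frequency bots and hijacked genuine accounts sail straight past a rate check like this, which is why it can only ever be a first filter.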

And it’s not just something that takes place on Twitter: Facebook is a breeding ground for disinformation. For posts with high engagement — a BBC article, for example — the algorithm puts the most interacted-with comments at the top of the thread. I’ve looked at accounts sowing discord in exactly these prominent mainstream news comment threads, and found they’re often linked to a centralised agenda — with shared profile images, and one identity behind many accounts.
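That “shared profile images, one identity” pattern can be surfaced programmatically. Below is a minimal sketch with invented handles and image bytes: it groups accounts by a hash of their avatar, since identical files hash identically and recycled profile pictures cluster together. Near-duplicate images would need a perceptual hash instead (for example the third-party imagehash library).

```python
import hashlib
from collections import defaultdict

# Invented records: (account handle, raw profile-image bytes).
# In practice the avatars would be downloaded via a platform API.
profiles = [
    ("@concerned_citizen_1", b"avatar-bytes-A"),
    ("@concerned_citizen_2", b"avatar-bytes-A"),  # same recycled avatar
    ("@regular_commenter",   b"avatar-bytes-B"),
]

# Group handles by the SHA-256 digest of their avatar bytes.
clusters = defaultdict(list)
for handle, image_bytes in profiles:
    clusters[hashlib.sha256(image_bytes).hexdigest()].append(handle)

# Any digest shared by more than one account is worth a closer look.
for digest, handles in clusters.items():
    if len(handles) > 1:
        print("possible coordinated cluster:", handles)
```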

Why’s it so worrying?

Fake news isn’t just an article that pops up in your social media feed — it’s the start of a journey toward a point of view that many people have sadly bought into. This is the ultimate goal, in many cases: to create a community of fake news advocates who can no longer tell the difference between truth and lies, but who engage with and share the content anyway because it aligns with the views they’ve developed — largely because of fake news articles in the first place.

In private or semi-private groups across the web, there are people proactively sharing fake content — be it an anti-immigration group, or a conspiracy thread. The scale is hard to track, given their very nature — but they’re sizeable, self-sustaining and on the rise. 

These groups are dangerous because they’ve wholly bought into an ideology. Once people live part of their lives online, they start to seek self-esteem and validation in that space. Sharing an item of fake news among people who share the same ideology ultimately provides psychological validation.

It makes people feel special, creating a sense of community among those who believe what they’re reading. Any inconvenient facts that get in the way of the defining group narrative are simply denounced, themselves, as fake news. If you think of it as a cult, it’s easy to understand — people don’t set out to join, but once in, it becomes part of their identity. In these echo chambers, sharing fake news becomes a form of social currency.

What does all of this mean? 

Fake news is, ultimately, an attack on the internet and on humanity. There’s no one way to stop it, because it’s everywhere, growing and spreading at an alarming rate. Millions of accounts tweeting every two seconds and groups of fanatical believers are a terrible combination.

Both groups that use it, profiteers and dividers alike, want it to be an overwhelming, destabilising force — and they’re succeeding. Factor in new technology such as deep fakes, and you’re in a situation where anything can be denounced as fake — even the truth.

You might think you’re immune, but most of us can’t reliably identify items of fake news. And it’s no surprise: the tactics and technology are so sophisticated that it’s tough to stay on top of it all.

How can we stop it?

Any attempt to stop fake news will need to work on several fronts.

The first step is to accept that, at an individual level, no one is immune. Fake news is notoriously hard to spot — and as mentioned earlier, just 4% of people can tell the difference.

Education, combined with blocking access for the worst offenders, is likely to be the best approach. But it won’t be an easy fight: once someone is defined by their beliefs, it’s hard to pull them out of that community. Education early in life is key — something some schools are already starting to do.

Digital deplatforming can help too, though it must be handled carefully: there’s always a risk of creating martyrs to a fake news cause, or of pushing people into darker, encrypted communities where the spread of information becomes harder to track.

Fact-checking websites are a great asset, especially when they work with social platforms. It’s a big ask for individuals to go away and research the content they’re sharing. But when fact-checking organisations work with social media outlets — as Facebook did recently with the boy-on-the-hospital-floor story, working with Full Fact and labelling the resulting ‘news’ as fake — the results can be powerful.

