Are you the sort of person who likes to spread false rumours about someone with cancer? Most people would answer ‘no’ to this question. And yet, earlier this year, the actions of many suggested otherwise. Following the Princess of Wales’ planned absence from public events, social media was flooded with conspiracy theories, which were then swiftly forgotten once the true reason emerged.
Some journalists got in on the act too, initially speculating wildly before adopting the role of Attenborough-esque observers and lamenting how members of ‘the public’ had been spreading misinformation.
Why does speculation like this take off so easily? Several researchers have found that social information (i.e. information about people and their interactions) has a transmission advantage: study participants are more likely to remember and share stories about what people have done to other people. This would be consistent with the theory that human intelligence evolved to deal with increasingly complex social interactions, and hence that our brains place a higher value on this type of information.
Everyone can be a troll, but some are especially good at it
When it comes to the nastier end of online interactions, harmful behaviour isn’t necessarily limited to a small, well-defined group. One 2017 study looked at ‘trolling’ – defined as behaviour outside the normal acceptable bounds of an online community – and concluded that ‘ordinary people can, under the right circumstances, behave like trolls’. In particular, the researchers found that people are more likely to become trolls when they’re in a bad mood, or when other people in the conversation are already trolling. So much so that mood and context were better at predicting trolling behaviour than an individual’s historical tendency to be a troll.
Yet even if many users can become trolls, there is a subset of individuals online who are much more strategic in how they spread information. These manipulators often have a deeper understanding of what makes content go viral online compared to other users.
People often argue that extreme views would spread regardless of media amplification. However, studies that reconstruct the spread of online information have suggested the opposite: content rarely gets widespread traction without a large ‘broadcast event’ to boost it (i.e. amplification from someone with lots of followers). When an idea becomes popular, it’s usually because prominent figures and media outlets have played a role in spreading it, either intentionally or unintentionally. On Twitter, bots sharing untrustworthy information have often targeted users with large followings.
By strategically targeting and amplifying their messages, manipulators can give the false impression of broad public support for particular policies or political stances. In marketing, this tactic is known as ‘astroturfing’, because it gives the illusion of genuine grassroots support. It’s a form of information laundering; if something takes off, most people will see the message reported by a familiar source rather than an anonymous social media account.
Defending against manipulation
The evolving landscape of journalism has made it more challenging to fend off these media manipulators. As more outlets focus on gaining online shares and clicks, they become vulnerable to people who can spread attention-grabbing ideas. From a purely technological standpoint, many of these manipulators aren’t disrupting social media – they’re just following its contagion-led incentives.
Unfortunately, there have been several high-profile examples of harmful engineered amplification. For example, following the 2019 mosque shootings in Christchurch, New Zealand, several outlets disregarded well-established guidelines for reporting on terrorist attacks.¹ Many revealed the shooter’s name, elaborated on his ideology, or even shared his video and linked to his manifesto. Worse, this information spread easily; the stories most widely shared on Facebook tended to be the ones that violated reporting guidelines.
In her 2018 report ‘The Oxygen of Amplification’, Whitney Phillips outlined several other examples of media manipulation efforts, and reflections from the journalists who have generated clicks and coverage while also amplifying extreme views. Take the following quote from a reporter at a major media outlet:
“The people I’m covering are some of the worst people I’ve ever met, their attitudes are despicable, I feel like they’re getting less resistance from the culture and system and I feel like something really bad is coming down the line,” he said, before pausing. “It’s really good for me, but really bad for the country.”
In the report, Phillips outlined a number of recommendations for reducing the influence of manipulators. These included weaving the performative nature of manipulators’ actions into the story, avoiding deferring to manipulators’ chosen language, explanations, or justifications, and reflecting on whether the story absolutely requires quotes from manipulators.
When I was researching The Rules of Contagion, I was struck by both the scale and inherently contagious nature of the challenge Phillips was describing. ‘As soon as you’re reporting on a particular hoax or some other media manipulation effort, you’re legitimising it,’ she told me, ‘and you’re essentially providing a blueprint for what somebody down the road knows is going to work’.
From 2018 to 2024
Last month, there was a series of brief follow-up articles to that 2018 report. In one piece, Phillips noted that the media landscape is even more fractured now than in 2018, with more and more people getting their information from non-mainstream news sources (e.g. TikTok accounts). She also suggested that some outlets learnt the lessons of the 2016 US election arguably too well, limiting their reporting of new nonsense claims by Trump, but in doing so reducing their audiences’ awareness that he is making them. (Unlike in the 2016 election, when Trump was an outsider who gained an estimated $2bn worth of free mainstream media coverage, he’s now an ex-President and the current Republican nominee.)
In addition, Phillips reflected on the framing of manipulation efforts as ‘disinformation’, and the partisan reaction this can elicit, particularly among those who feel ‘shunned’ by many of those in public life:
If you are someone who aligns with institutional consensus — the norms, values, and standards of evidence codified within mainstream journalism, academia, and government — to call something “disinformation” is to perform a kind of sympathetic magic on yourself. For many conservative audiences, your use of that word transforms you into a liberal: one of the shunners.
In a linked article, Sareeta Amrute makes the point that these challenges reach beyond simply defining whether a claim is true or not. As she suggests: ‘misinformation and disinformation are misnomers to the degree that they sustain focus on the veracity of information and detract attention from institutional legacies of trust and mistrust’.
In other words, it’s not just about whether people think that an election has been rigged – or that a princess has been substituted for a body double. Nor is it just a matter of social information being more transmissible, and harmful behaviour propagating more easily in the presence of similar actions or targeted amplification. It’s also about why someone might be susceptible to a certain idea in the first place.
Cover image: Jamie Street via Unsplash
1. As PM Jacinda Ardern explained at the time: ‘He is a terrorist; he is a criminal; he is an extremist. But he will, when I speak, be nameless. He sought many things from his act of terror, but one was notoriety. And that is why you will never hear me mention his name.’