From echo chambers to extreme opinions
How can we get better at seeing a useful spectrum of views online?
‘Surely people can’t be that extreme in their views?’
Often when I’ve mentioned a viewpoint I’ve seen on Twitter, particularly on the topic of epidemics or health, I’ve had a response like the one above.
‘Surely nobody actually said that about [insert topic]?’
And it’s made me think about why this is. What have I been seeing that others might not be?
Perhaps it’s just a matter of numbers. In recent years, I’ve ended up with quite a lot of attention on some of my posts, which I realise has effectively given me a relatively large sample size when it comes to being exposed to others’ views.
Even so, does this mean I’m seeing the full picture?
Away from the echoes
When people talk about online interactions, it’s common to refer to ‘echo chambers’. The idea is that users are just exposed to content they already agree with. After all, that would explain why people end up so polarised online.
Unfortunately, the reality is more complex. One 2016 study analysed the (anonymised) web browsing habits of 50,000 Americans, and found that the information they encountered on social media and via search engines was indeed more polarised than the content they saw on their favourite news websites. But the social media and search engine content was also more diverse. People saw more of the ideologies that were opposite to their own, albeit from the extreme end of the spectrum.
A later study, focusing on Twitter, suggested that exposure to opposing political viewpoints isn’t necessarily helpful for expanding a user’s outlook. When Republican participants followed a liberal Twitter bot, they ended up expressing substantially more conservative views afterwards. Similarly, Democrat users who followed a conservative Twitter bot became even more liberal afterwards (although the evidence for this effect was weaker).
Sampling from the tails
Getting people to follow bots is one thing, but how does the range of views we’re exposed to scale with the number of people we naturally interact with? Take the following (very) simple illustrative example. Suppose that in a given online interaction, we’re exposed to a viewpoint drawn at random from the overall distribution of views out there. If we interact with a single random user, the probability we’ll avoid encountering the most extreme 0.1% of views is by definition 99.9%.
So, if we have interactions with N random users, the probability we’ll avoid encountering any of these 0.1% views is:

$$0.999^N$$

Which gives us the following relationship between the number of people we randomly interact with and the probability we’ll see at least one view from the most extreme 0.1% (i.e. we won’t manage to avoid them):

$$1 - 0.999^N$$
In the course of 100 random interactions, we’d therefore have a 10% chance of encountering at least one view from the most extreme 0.1%. Increase that to 1000 interactions, and this rises to 63%. For 5000 random interactions, there’s a 99% chance of encountering at least one of these extreme views.
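If you want to check these figures yourself, here’s a minimal Python sketch of the same calculation (the function name and the 0.1% tail value are just illustrative choices, and the independence assumption is the same simplification as above):

```python
# Probability of encountering at least one view from the most extreme
# `tail` fraction of the distribution over n independent interactions.
def p_extreme(n, tail=0.001):
    return 1 - (1 - tail) ** n

for n in [100, 1000, 5000]:
    print(f"{n:>5} interactions: {p_extreme(n):.0%}")

# Prints:
#   100 interactions: 10%
#  1000 interactions: 63%
#  5000 interactions: 99%
```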
This may also help explain why so many new Bluesky and Threads users say that these new platforms are ‘nicer’ than Twitter, but also a bit blander initially. Because the number of interactions is generally smaller, they’re not seeing as much of the extremely useful (or extremely unpleasant) content they did before. (Of course, the fact that these platforms don’t have their most prominent users actively amplifying racists no doubt helps too.)
So if random interactions make it hard to see the full distribution without an overwhelming volume of content, and active exposure to opposing viewpoints can backfire and make a platform less usable, how can we increase the range of usefully diverse opinions we’re exposed to online?
Engineering a better distribution
A few years ago, I decided to try to expand the range of viewpoints I saw on Twitter, so I followed several people I disagreed with on many topics. It was an often-frustrating experience, particularly when I’d followed commentators who were known for being provocative. I came to realise that one of the main reasons I found it frustrating was that they’d frequently get the basics wrong (e.g. about scientific studies or reported statistics), which made it difficult to have much confidence in the rest of their argument.
Gradually, I’ve changed my approach: I now follow people I agree with on the basic reality, but might disagree with on interpretation or what to do about it. For example, you might agree with someone about the fatality risk of a certain disease, but disagree about the optimal control strategy. Or agree about the long-term risks posed by climate change, but disagree about the immediate policy priorities for tackling it.
One approach that seems particularly helpful for seeing a usefully wider distribution is interacting with people who don’t have an obvious ‘team’ they agree with on every single issue, particularly when subjective decisions are involved. If viewpoints on different issues are highly correlated, so that knowing someone’s opinion on one topic reliably predicts their views on all the others, people will cluster at each end of the spectrum, and these groups will often end up highly polarised. In my experience, this team-based dynamic can reduce the potential for constructive disagreement on difficult topics. The reason? There ends up being more focus on being ‘right’ (with the accompanying engagement it brings) than on being useful.