In recent years, I’ve had the strange experience of researching a book that covers the spread of harmful information online, then subsequently finding myself on the receiving end of many of the toxic tactics I wrote about. But I’ve also come to realise that some of these tactics aren’t as well known as they should be, and understanding what sorts of bad faith approaches are out there can put those who want good evidence-based discussions in a better position to deal with them. So below is a brief summary of some of the things I’ve learnt.
The three Fs
In her work on online censorship, Margaret Roberts identifies three common strategies used to suppress unwanted information: fear, flooding and friction. If there is a threat of repercussions for rule breaking, the resulting fear can suppress action. If censors flood online platforms with the opposing viewpoint, it can drown out contrasting messages. And if content is removed or difficult to post, it creates friction by slowing down access to information.
We see similar approaches employed by users who want to suppress evidence-based discussions online. The fear of an overwhelming, abusive backlash can deter researchers from getting involved in the first place. Meanwhile, flooding with nonsense posts and false claims can create a volume of content that is too large to debunk one by one, and can drown out any efforts to correct the record. Finally, friction – such as the use of quote tweets, disabled replies and locked accounts – can disrupt the flow of discussion, and make it hard to communicate a cohesive counterargument.
Sea lioning
A common online flooding tactic is known as ‘sea lioning’. This involves a user repeatedly requesting evidence or explanations, in a seemingly polite manner, with the intent of exhausting or frustrating their opponent, rather than having a genuinely constructive discussion.
For example, if a scientist highlights the risks of climate change, an online user might say ‘Can you show me exactly how much CO₂ is being emitted and how you know it’s causing climate change?’ Then, if the scientist points to IPCC reports and consensus evidence, the user might follow up with ‘But how do you know those studies are accurate? Can you provide a summary of all those studies and explain how each one proves this point?’ If the scientist refuses or dismisses the unrealistic requests, the user might then say ‘Why won’t you answer my simple questions? Are you hiding something?’
A related flooding tactic is known as the ‘Gish gallop’, where lots of points of varying validity are raised in rapid succession, making it very difficult to respond effectively. For example, a Gish galloper might say something like the following:
Climate change isn’t real. First, the Earth’s temperature has fluctuated naturally for millions of years, and there’s no proof that humans are causing any warming. Plus, CO₂ is actually good for plants, so more CO₂ is beneficial, not harmful. Also, climate scientists predicted a new Ice Age in the 1970s, and that never happened. Volcanoes emit way more CO₂ than humans ever could, and the so-called ‘hockey stick’ graph has been debunked. And remember, the data from climate models are unreliable and often tweaked to show warming where there is none. And if climate change were really happening, why do we still have record cold temperatures?
Sound familiar? We’ll look at how to respond further down, but first, let’s run through a couple of other common tactics.
Motte-and-bailey arguments
One way people can introduce friction is with shifting arguments. A popular tactic here is the ‘motte-and-bailey’ approach. Named after the medieval castle design, this tactic involves starting with an extreme, difficult-to-defend position (the ‘bailey’), then retreating to a less controversial, easier-to-defend argument (the ‘motte’) when challenged. For example, someone might make a false claim that a vaccine regularly comes with very harmful side effects, then when challenged they’ll argue that they’re just highlighting that vaccines aren’t always perfect, and that people should have a choice.
Poisoning the well
While we’re on the topic of castles, another common method is to ‘poison the well’ in a debate by trying to discredit a source before they can make their argument. For example, someone might state that ‘the medical establishment have got things wrong in the past, so scientists can’t be trusted, regardless of what they’re about to say’.
How to respond
I’ve found that one of the most important steps in dealing with online bad faith tactics is to realise they’re being used in the first place, and to recognise what they’re trying to achieve. Below are a few approaches I’ve found helpful for when you discover you’re not in a genuine good faith discussion.
Remember the theatre. Many toxic online tactics are designed to trigger a frustrated or defensive response, ideally with the target saying something under pressure that they’ll later regret. ‘O Lord, make my enemies ridiculous,’ as Voltaire once said. With a bad faith user, you might not have any chance of persuading them or ‘winning’ an argument, but your responses may be seen by others, and could help in shaping what they think about the issue. So, remember that you’re in effect on a stage whenever you use public social media.
Call out the tactic. Onlookers seeing a discussion online may not recognise the tactics being used, so it can be helpful to state what is happening. For example, faced with a sea lion, you might point out that they seem more interested in asking lots of questions than in listening to the answers and the information provided. Or highlight that a motte-and-bailey proponent is retreating from a weak position to a safer one. Or emphasise that quoting out of context is not a constructive form of discussion, and that you won’t engage with abusive messages.
Stay focused. Fear, flooding and friction all aim to disrupt access to information. In the case of scientific debates, this can reduce the ability of wider audiences to follow the conversation or interpret evidence. So, it can help to remain focused on key points, no matter how much a minority of users may want to drag the discussion elsewhere. Don’t get lured into defending tangential points – or unrelated people and institutions – rather than addressing the core topic of debate.
Of course, this is far from a comprehensive list, and I don’t want to downplay how tiring and difficult it can be to communicate science in a hostile online environment. But hopefully it at least provides a starting point for those who might have to deal with bad faith discussions online.
Cover image: Wikimedia Commons
And another one I’ve experienced, as a woman, is to comment on my appearance as a way of discrediting me – as if how I look has any bearing on what I say.
Thank you for setting this out so clearly. I've learned to engage only if:
• I have some understanding of the topic
• it's something that I think is important
• I'm not tired, hungry, or stressed.
I then follow this strategy:
1. I always start off assuming the other person is keen to widen their understanding, and acting in good faith.
2. I always provide a link to a robust source to support any assertion I make.
3. As soon as I realise that bad faith is involved, I gently suggest that I've evidenced all of my assertions, and they haven't evidenced any of theirs (or point out flaws in their evidence).
4. If they persist, I just say, 'We'll have to agree to disagree,' and leave it at that.
As you point out, this is a stage. I do the above for the benefit of any agnostic readers, to let them see why the other party was not correct – i.e. a 'teachable moment'.
How long I continue with point 2 above depends entirely on how much energy and time I have available!