How to (hopefully) disagree about problems more constructively
Are we talking about the situation, the intervention or the policy?
One of the things I’ve noticed over the years is how debates about an emerging threat can often end up at cross-purposes. From climate change to pandemics, it’s easy to get quite far into a discussion before realising exactly how fragmented things are.
After all, an idea for how to tackle a problem won’t resonate much with someone who doesn’t believe there’s a problem in the first place. Conversely, a researcher who identifies an emerging problem won’t necessarily have a preferred set of policies for how countries should deal with it.
To make sense of these disagreements, I’ve found it useful to distinguish between three levels of response:
The situation. What is the current reality we’re dealing with? From a policy advice perspective, it might be the equivalent of saying: ‘There’s a problem, and this is what it looks like.’
The intervention. If there’s a problem, the obvious next step is to identify some potential solutions, and understand the implications of these options. So given a problem, it’s the equivalent of saying: ‘This is what we could do about the problem, and what the impact of those interventions might look like.’
The policy. After identifying the situation and options to tackle it, we have a distinct final step: the decision about what action to take (if any). After all, knowing there’s a problem and what we could do about it isn’t the same as deciding what we should do about it.
I’ve noticed that media outlets (particularly political journalists, as you might expect) are most interested in level 3. In extreme cases, they may not be that interested at all in level 1 (i.e. whether there’s actually a problem) or level 2 (i.e. whether the solutions are genuinely effective). Instead, they may prefer to highlight political arguments and personal attacks over what should be done next.
Science versus advocacy
While researching my latest book Proof, I was struck by a comment by Austin Bradford Hill, who played a key role in identifying the causal link between smoking and cancer. ‘It was no part of our job to tell the public how to behave with regard to smoking,’ he once wrote. He suggested his colleagues should be careful not to stray too far from the evidence, as doing so could dilute their credibility and blur the distinction between analysis and advocacy: ‘Our job was to ascertain the facts by research and publish them in medical journals. To become propagandists would ruin us as scientists and make us “biased” presenters of further material.’
Many researchers, I suspect, would disagree. If there’s evidence of a problem, they might argue, we should communicate how to counter that problem. But I think Bradford Hill hit on an important point: whenever there is the temptation to veer from saying we could do something to stating we should do something, there is the risk of reaching beyond science into personal values and politics.
Sometimes, the blurring works the other way, with policy criticisms becoming scientific criticisms. Take vaccine mandates. There are a lot of moral and ethical aspects to such policies, and one person might in good faith oppose mandates even if they agree with others about the quantitative risk of the disease and the evidence base around the impact of vaccines and mandates.
But advocates for (or against) certain policies will sometimes move down the levels to try to undermine opposing positions. In these cases the argument will go deeper, and stray from the available evidence. Someone might not just argue that mandates are the wrong policy; they might claim that the underlying vaccines don’t work. Or they might go right back to the first level, and claim that the disease isn’t actually a problem in the first place.
This distinction matters, because if people agree on the first two levels, disagreements will generally boil down to differing values, morals or politics around the policy. Strictly speaking, it remains an evidence-informed disagreement, even when the underlying tension stems from differing political or ethical perspectives.
In contrast, if someone doesn’t believe there’s a problem at all, or thinks that an intervention won’t have any effect when the best evidence suggests it will, any discussion about policy options will have no foundation to build on. If we want to make progress, we need to recognise which level we’re at, where the other person is, and what’s caused us to end up in different places.
If you’re interested in reading more about evidence and its limits, you might like my new book Proof: The Uncertain Science of Certainty, which is available now.
Cover image: Just Snapshot via Unsplash
This is quite relevant to my field of study and interest, autism. The position of many in both science and the press is stuck at the situation: the claim that there isn’t even a problem, so we don’t need to talk about solutions and policies.
But the problem with that is that there really is a problem, both at the individual level for many people with the disorder and at the population level, where the evidence consistently shows a long-running, rapid increase in case incidence.
Yes, I realize that some people really don’t like to hear about this. And some would say that I am wrong here. But so far nobody has been able to produce valid evidence to that effect. What passes for evidence is usually opinion or citations of opinions. Some sources look like evidence, but they fall apart on close inspection.
Adam, thank you for this systematic and clear approach to problem solving, and thanks also for this Sir Austin Bradford Hill quote: "Our job was to ascertain the facts by research and publish them in medical journals. To become propagandists would ruin us as scientists and make us “biased” presenters of further material." I think it is wise for us researchers to adhere to it.