Great post, I agree that using weasel words to dodge responsibility is problematic; but I’m also concerned that trying to use a 0–100% "probabilistic yardstick" for standardization introduces new problems. It creates the illusion of a neat, "fair-dice" world, whereas in reality, judgments or estimates of the unknowable can't meaningfully be assigned percentages—even within an interval.
So perhaps the use of ambiguous words, viewed from a more charitable lens, might actually be a feature that allows for reinterpretation?
I'd be interested to hear what you think.
P.S. This post is largely inspired by Taleb’s The Ludic Fallacy.
It also depends a bit on what the estimates are being used for – thinking of Tetlock et al.'s 'Goldilocks zone', where estimates between 20% and 80% can be evaluated for accuracy (and for calibration, if a range is given). But as you say, lower-probability events may simply not be suitable for probability-based analysis – a point Keynes also made almost a century ago ('we simply do not know').
Enjoyed the historical example here. When I teach intro game theory we always have a discussion around this, but then brush it aside because we require specific probabilities (or distributions) to be assigned to beliefs, actions, and strategies. Of course that doesn't mean the problem doesn't creep in in the real world, outside of the whiteboard.
Likelihood of cause comes up all the time in medicine, especially in medicolegal (forensic) work. Thank you for such a lucid treatment. I’ll be using it (duly credited)!
As a lawyer and policy analyst who constantly deals with value judgements, I really liked this one. Thank you!
This post sold me a book. Thanks for both.
Love this (and came upon it via the excellent daily news aggregator The Browser).
Last year I wrote about my own practice of using a "scale of 1 to 10" metric. I also use "non-zero chance".
https://thejadedcynic.substack.com/p/scale-of-1-to-10
Medicine has many. “Not infrequently”, “not unusual”, “Let’s see what happens”, “non-accidental injury”.
"We believe" and "We doubt" have the most hilariously broad distributions, with a little bump at the leading and trailing ends.
I think we should instead educate people to use numbers. It isn't harder than words; it just requires a little effort to get used to.