notes-forecasting

The journalist in the article linked below speculates that people ignore reported probabilities and "round" them to a binary prediction.

Namely, when people are told that, e.g., the odds of Trump winning the election are 1 in 3, they ignore the probability and interpret it as someone saying "I predict that Trump will lose". Then if Trump wins, the reader interprets that as "the person who made that prediction was wrong", and concludes that the predictor must have used an incorrect procedure to generate the prediction and/or that they are a bad person to listen to in the future.

The article touches on three potential remedies:

1) Assume that most people are too stupid to handle numbers. Never tell most people about probabilities; only give them narratives

or:

2) Give people the probabilities, but accompany them with a picture showing a grid of little icons, with each possible outcome repeated in proportion to its probability (e.g. show 100 little icons, some of them Trumps and some of them Hillaries; see the sketch after this list)

and/or:

3) Accompany the reporting of the predicted probability with a narrative about the LESS likely scenario, including what could cause it to occur and what the outcomes would be, in order to try to get the reader to take it seriously.
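
As an aside, remedy (2) is easy to prototype. Here is a minimal sketch of a text-mode icon array; the icon_array name is made up for illustration, and the 1-in-3 figure and the T/H symbols are just the article's running example:

    def icon_array(probability, symbol_yes="T", symbol_no="H", total=100, per_row=10):
        # Round the probability to a whole number of icons out of `total`.
        n_yes = round(probability * total)
        icons = symbol_yes * n_yes + symbol_no * (total - n_yes)
        # Print the icons as a grid, e.g. 10 rows of 10.
        for i in range(0, total, per_row):
            print(" ".join(icons[i:i + per_row]))

    icon_array(1/3)  # 33 T's followed by 67 H's in a 10x10 grid

(A real graphic might shuffle the icons rather than printing them in two solid runs, to better convey that either outcome could occur.)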

https://mobile.nytimes.com/2017/12/24/opinion/2017-wrong-numbers.html

P.S. Incidentally, when you hear someone make a wrong prediction, of course it should decrease your confidence in their future predictions, but otoh even good predictors will be wrong sometimes, so you need multiple predictions to get much information about their reliability. Once you have such a dataset, how should you score predictors? One way is the Brier score:

https://en.wikipedia.org/wiki/Brier_score

which is just the average of:

(predicted_probability - binary_event_outcome)^2

over all of the observed events (where binary_event_outcome is either 0 or 1).
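
For concreteness, here's a minimal sketch of computing it in Python; the forecast and outcome lists are made-up illustration data:

    def brier_score(predicted_probabilities, outcomes):
        # Mean squared difference between forecast probabilities and 0/1 outcomes.
        # Lower is better: 0.0 is a perfect forecaster, 0.25 is what always
        # saying "50%" earns, and 1.0 is confidently wrong every time.
        return sum((p - o) ** 2 for p, o in zip(predicted_probabilities, outcomes)) / len(outcomes)

    outcomes = [1, 0, 0, 1, 0, 0]  # the event happened on 2 of 6 occasions

    # Hypothetical forecaster who said "1 in 3" every time:
    print(brier_score([1/3] * 6, outcomes))  # 2/9 ~= 0.222

    # Maximally hedging forecaster who always says 50%:
    print(brier_score([0.5] * 6, outcomes))  # 0.25

Note that the well-calibrated 1-in-3 forecaster scores (slightly) better than the hedger, even though a binary-minded reader would say they "predicted wrong" twice; that is exactly what a scoring rule like this buys you over right/wrong counting.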

---