Abstract
A review of scoring rules highlights the distinction between rewarding honesty and rewarding quality. This motivates the introduction of a scale-invariant version of the Continuous Ranked Probability Score (CRPS), which enables statistical accuracy (SA) testing based on an exact rather than an asymptotic distribution of the density of convolutions. A recent data set of 6761 expert probabilistic forecasts for questions whose actual values are known is used to compare performance. New insights include that (a) variance due to assessed variables dominates variance due to experts, (b) performance on mean absolute percentage error (MAPE) is weakly related to SA, (c) scale-invariant CRPS combinations compete with the Classical Model (CM) on SA and MAPE, and (d) CRPS is more forgiving with regard to SA than the CM, as CRPS is insensitive to location bias.
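The abstract contrasts CRPS with MAPE as performance measures. As background only (the scale-invariant CRPS variant introduced in the article is not reproduced here), the standard empirical CRPS for a sample-based forecast and the usual MAPE can be sketched as follows; the function names and the ensemble representation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def crps_ensemble(samples, y):
    """Empirical CRPS of an ensemble forecast against realization y.

    Uses the standard identity CRPS = E|X - y| - 0.5 * E|X - X'|,
    where X, X' are independent draws from the forecast distribution.
    """
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - y))
    # Mean absolute pairwise difference between ensemble members.
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

def mape(actuals, forecasts):
    """Mean absolute percentage error over paired actuals and point forecasts."""
    actuals = np.asarray(actuals, dtype=float)
    forecasts = np.asarray(forecasts, dtype=float)
    return np.mean(np.abs((actuals - forecasts) / actuals)) * 100.0
```

A degenerate ensemble concentrated exactly on the realization scores a CRPS of zero, which is the sense in which CRPS rewards both calibration and sharpness; MAPE, by contrast, scores only a point forecast and depends on the scale of the actual value, which is one motivation for a scale-invariant score.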
| Original language | English |
|---|---|
| Article number | e189 |
| Number of pages | 16 |
| Journal | Futures and Foresight Science |
| Volume | 6 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 2024 |
Keywords
- Brier score
- Classical Model
- Continuous Ranked Probability Score
- expert judgment
- geometric probability
- location bias
- logarithmic score
- mean absolute percentage error
- overconfidence
- probability interval score
- scoring rules