Scoring rules and performance, new analysis of expert judgment data

Gabriela F. Nane*, Roger M. Cooke

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

A review of scoring rules highlights the distinction between rewarding honesty and rewarding quality. This motivates the introduction of a scale-invariant version of the Continuous Ranked Probability Score (CRPS), which enables statistical accuracy (SA) testing based on an exact rather than an asymptotic distribution of the density of convolutions. A recent data set of 6761 expert probabilistic forecasts for questions whose actual values are known is used to compare performance. New insights include that (a) variance due to assessed variables dominates variance due to experts, (b) performance on mean absolute percentage error (MAPE) is weakly related to SA, (c) scale-invariant CRPS combinations compete with the Classical Model (CM) on SA and MAPE, and (d) CRPS is more forgiving with regard to SA than the CM, as CRPS is insensitive to location bias.
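For readers unfamiliar with the two scores compared in the abstract, the following is a minimal sketch of a sample-based CRPS estimate and of MAPE. It uses the standard identity CRPS(F, x) = E|X − x| − ½ E|X − X′|, where X and X′ are independent draws from the forecast distribution F. The function names and the Gaussian toy forecast are illustrative assumptions, not taken from the paper, and the paper's scale-invariant CRPS variant is not reproduced here.

```python
import numpy as np

def crps_sample(samples, obs):
    """Sample-based CRPS estimate via
    CRPS(F, x) = E|X - x| - 0.5 * E|X - X'|,
    with X, X' independent draws from F. Lower is better."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - obs))
    # Pairwise absolute differences between all sample pairs
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent; assumes nonzero actuals."""
    actuals = np.asarray(actuals, dtype=float)
    forecasts = np.asarray(forecasts, dtype=float)
    return 100.0 * np.mean(np.abs((actuals - forecasts) / actuals))

# Toy example: an expert's forecast as a Gaussian, scored against a realization
rng = np.random.default_rng(0)
draws = rng.normal(loc=10.0, scale=2.0, size=5000)
print(crps_sample(draws, obs=11.5))   # CRPS of the probabilistic forecast
print(mape([100.0, 200.0], [90.0, 230.0]))  # MAPE of two point forecasts
```

Note that plain CRPS, as sketched here, carries the units of the forecast variable, which is why a scale-invariant version is needed before scores on differently scaled variables can be pooled, as the paper proposes.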
Original language: English
Article number: e189
Number of pages: 16
Journal: Futures and Foresight Science
Volume: 6
Issue number: 4
DOIs
Publication status: Published - 2024

Keywords

  • Brier score
  • Classical Model
  • Continuous Ranked Probability Score
  • expert judgment
  • geometric probability
  • location bias
  • logarithmic score
  • mean absolute percentage error
  • overconfidence
  • probability interval score
  • scoring rules
