Calibrating experts’ probabilistic assessments for improved probabilistic predictions

A.M. Hanea*, G.F. Nane

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Expert judgement is routinely required to inform critically important decisions. While expert judgement can be remarkably useful when data are absent, it can be easily influenced by contextual biases, which can lead to poor judgements and subsequently poor decisions. Structured elicitation protocols aim to: (1) guard against biases and provide better (aggregated) judgements, and (2) subject expert judgements to the same level of scrutiny as is expected for empirical data. The latter ensures that if judgements are to be used as data, they are subject to the scientific principles of review, critical appraisal, and repeatability. Objectively evaluating the quality of expert data and validating expert judgements are other essential elements. Considerable research suggests that the performance of experts should be evaluated by scoring their answers to questions related to the elicitation questions, whose answers are known a priori. Experts who can provide accurate, well-calibrated and informative judgements should receive more weight in a final aggregation of judgements. This is referred to as performance-weighting in the mathematical aggregation of multiple judgements. The weights depend on the chosen measures of performance. We do not yet fully understand which aggregation methods perform best, how well such aggregations perform out of sample, or the costs and benefits of the various approaches. In this paper we propose and explore a new measure of experts’ calibration. A sizeable data set containing predictions for outcomes of geopolitical events is used to investigate the properties of this calibration measure when compared to other, well established measures.
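To make the idea of performance-weighting concrete, the sketch below shows one generic way it can be implemented: each expert is scored on "seed" questions whose outcomes are known, the scores are turned into normalised weights, and the experts' probabilities for a target question are combined in a weighted linear opinion pool. This is an illustration only, using a Brier-score-based accuracy measure rather than the calibration measure proposed in the paper; the function names and toy data are hypothetical.

```python
import numpy as np

def brier_based_weights(seed_preds, seed_outcomes):
    """Illustrative performance weights (not the paper's measure).

    seed_preds: (n_experts, n_seed) predicted probabilities for seed events.
    seed_outcomes: (n_seed,) realised outcomes in {0, 1}.
    Returns non-negative weights summing to one (better score -> larger weight).
    """
    brier = np.mean((seed_preds - seed_outcomes) ** 2, axis=1)  # per-expert Brier score
    skill = np.clip(1.0 - brier, 0.0, None)                     # higher is better, no negatives
    if skill.sum() == 0:                                        # fall back to equal weights
        return np.full(len(skill), 1.0 / len(skill))
    return skill / skill.sum()

def weighted_pool(target_preds, weights):
    """Linear opinion pool: weighted average of the experts' target probabilities."""
    return weights @ target_preds

# Toy example: three experts, four seed questions, one target question.
seed_preds = np.array([[0.9, 0.2, 0.8, 0.1],
                       [0.6, 0.5, 0.5, 0.4],
                       [0.3, 0.7, 0.4, 0.9]])
seed_outcomes = np.array([1, 0, 1, 0])
w = brier_based_weights(seed_preds, seed_outcomes)
print(w)                                        # best-calibrated expert gets the largest weight
print(weighted_pool(np.array([0.8, 0.55, 0.3]), w))  # pooled probability for the target event
```

Any scoring rule that rewards calibration and informativeness could be substituted for the Brier-based skill above; the paper investigates the properties of one such alternative measure on a large set of geopolitical forecasts.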

Original language: English
Pages (from-to): 763-771
Number of pages: 9
Journal: Safety Science
Volume: 118
DOIs
Publication status: Published - 2019

Bibliographical note

Accepted Author Manuscript

Keywords

  • Calibration
  • Performance based weighting
  • Probabilistic predictions
  • Structured expert judgement
