Expert forecasting with and without uncertainty quantification and weighting: What do the data say?

Roger M. Cooke*, Deniz Marti, Thomas Mazzuchi

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Post-2006 expert judgment data have been extended to 530 experts assessing 580 calibration variables from their fields. New analysis shows that point predictions taken as medians of combined expert distributions outperform combinations of the individual medians, and that medians of performance-weighted combinations outperform medians of equal-weighted combinations. Relative to the equal-weight combination of medians, using the medians of performance-weighted combinations yields a 65% improvement; using the medians of equal-weighted combinations yields a 46% improvement. The Random Expert Hypothesis underlying all performance-blind combination schemes, namely that differences in expert performance reflect random stressors rather than persistent properties of the experts, is tested by randomly scrambling expert panels. Generating distributions for a full set of performance metrics, the hypotheses that the original panels' performance measures are drawn from the distributions produced by random scrambling are rejected at significance levels ranging from 10⁻⁶ to 10⁻¹². Random stressors cannot produce the variations in performance seen in the original panels. In-sample and out-of-sample validation results are updated.
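
Illustration (not from the paper): the abstract's contrast between the "median of a combined expert distribution" and the "combination of the experts' medians" can be sketched in a few lines of Python. The panel quantiles and weights below are hypothetical placeholders, and the piecewise-uniform interpolation of each expert's 5%/50%/95% assessments is a simplifying assumption, not the authors' method.

    import numpy as np

    def sample_expert(quantiles, size, rng, bounds=(0.0, 100.0)):
        """Draw samples from a piecewise-uniform distribution matching an
        expert's 5%, 50%, 95% quantiles (a minimal-information choice)."""
        q5, q50, q95 = quantiles
        lo, hi = bounds
        edges = [lo, q5, q50, q95, hi]
        probs = [0.05, 0.45, 0.45, 0.05]          # mass between successive quantiles
        bins = rng.choice(4, size=size, p=probs)  # pick an interval per sample
        left = np.array(edges[:-1])[bins]
        right = np.array(edges[1:])[bins]
        return rng.uniform(left, right)           # uniform draw within the interval

    rng = np.random.default_rng(0)
    panel = {                                     # hypothetical 5%/50%/95% assessments
        "expert_A": (10.0, 30.0, 60.0),
        "expert_B": (20.0, 55.0, 90.0),
        "expert_C": (5.0, 25.0, 40.0),
    }
    weights = {"expert_A": 0.6, "expert_B": 0.3, "expert_C": 0.1}  # placeholder weights

    # Combination of medians: weighted average of the experts' 50% quantiles.
    comb_of_medians = sum(w * panel[e][1] for e, w in weights.items())

    # Median of the combination: pool samples from the weighted mixture, then take its median.
    n = 100_000
    samples = np.concatenate([
        sample_expert(panel[e], int(n * w), rng) for e, w in weights.items()
    ])
    median_of_comb = np.median(samples)

    print(f"combination of medians:    {comb_of_medians:.2f}")
    print(f"median of the combination: {median_of_comb:.2f}")

Swapping the placeholder weights for equal weights (1/3 each) gives the equal-weight variant discussed in the abstract; substituting performance-based weights gives the performance-weighted variant.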

Original language: English
Pages (from-to): 378-387
Number of pages: 10
Journal: International Journal of Forecasting
Volume: 37
Issue number: 1
DOIs
Publication status: Published - 2020

Keywords

  • Calibration
  • Combining forecasts
  • Evaluating forecasts
  • Judgmental forecasting
  • Panel data
  • Simulation
