Interpretable confidence measures for decision support systems

Jasper van der Waa*, Tjeerd Schoonderwoerd, Jurriaan van Diggelen, Mark Neerincx

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

23 Citations (Scopus)
83 Downloads (Pure)


Decision support systems (DSS) have improved significantly, but recent advances in Artificial Intelligence have also made them more complex. Current XAI methods generate explanations of model behaviour to facilitate a user's understanding, which fosters trust in the DSS. However, little attention has been paid to methods that establish and convey a system's confidence in the advice it provides. This paper presents a framework for Interpretable Confidence Measures (ICMs). We investigate which properties of a confidence measure are desirable and why, and how users interpret an ICM. We evaluate these ideas on several data sets and in user experiments. The presented framework defines four properties: 1) accuracy or soundness, 2) transparency, 3) explainability and 4) predictability. These properties are realized by a case-based reasoning approach to confidence estimation. Example ICMs are proposed for, and evaluated on, multiple data sets. In addition, ICM was evaluated in two user experiments. The results show that an ICM can be as accurate as other confidence measures while behaving more predictably. Moreover, ICM's underlying idea of case-based reasoning enables generating explanations of how a confidence value is computed, and facilitates users' understanding of the algorithm.
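To make the case-based reasoning idea concrete, here is a minimal illustrative sketch (not the authors' exact algorithm; the function name, distance metric, and choice of k are assumptions): confidence in a prediction is taken as the fraction of the k most similar past cases whose known outcome agrees with that prediction. Because the confidence value is grounded in concrete past cases, it can be explained to a user, e.g. "confidence 1.0, because all 3 most similar past cases had this outcome."

```python
# Illustrative case-based confidence sketch (an assumption, not the
# paper's exact ICM): confidence = fraction of the k nearest labelled
# past cases whose outcome agrees with the model's prediction.
import math

def case_based_confidence(query, prediction, cases, k=3):
    """cases: list of (feature_vector, label) pairs from past, labelled data."""
    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Select the k past cases most similar to the query.
    neighbours = sorted(cases, key=lambda c: dist(c[0], query))[:k]
    # Confidence is the share of those neighbours that agree with the prediction.
    agree = sum(1 for _, label in neighbours if label == prediction)
    return agree / k

# Past cases: two well-separated clusters with outcomes "A" and "B".
cases = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((0.2, 0.1), "A"),
         ((5.0, 5.0), "B"), ((5.1, 4.9), "B"), ((4.9, 5.2), "B")]
print(case_based_confidence((0.1, 0.1), "A", cases, k=3))  # 1.0: all 3 nearest cases agree
```

A measure of this shape is also predictable in the sense the abstract describes: queries far from any cluster of agreeing past cases receive visibly lower confidence.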

Original language: English
Article number: 102493
Pages (from-to): 1-11
Number of pages: 11
Journal: International Journal of Human-Computer Studies
Publication status: Published - 2020


  • Artificial intelligence
  • Confidence
  • Decision support systems
  • Explainable AI
  • Interpretable
  • Interpretable machine learning
  • Machine learning
  • Transparency
  • Trust calibration
  • User study


