Relation between prognostics predictor evaluation metrics and local interpretability SHAP values

Marcia L. Baptista*, Kai Goebel, Elsa M.P. Henriques

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

50 Citations (Scopus)
134 Downloads (Pure)

Abstract

Maintenance decisions in domains such as aeronautics are becoming increasingly dependent on being able to predict the failure of components and systems. When data-driven techniques are used for this prognostic task, they often face headwinds due to their perceived lack of interpretability. To address this issue, this paper examines how features used in a data-driven prognostic approach correlate with established metrics of monotonicity, trendability, and prognosability. In particular, we use the SHAP model (SHapley Additive exPlanations) from the field of eXplainable Artificial Intelligence (XAI) to analyze the outcome of three increasingly complex algorithms: Linear Regression, Multi-Layer Perceptron, and Echo State Network. Our goal is to test the hypothesis that the prognostics metrics correlate with the SHAP model's explanations, i.e., the SHAP values. We use baseline data from a standard data set that contains several hundred run-to-failure trajectories for jet engines. The results indicate that SHAP values track these metrics very closely, with differences observed between the models supporting the assertion that model complexity is a significant factor to consider when explainability matters in prognostics.
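The abstract refers to two kinds of quantities: local SHAP attributions produced for a trained prognostic model, and feature-level prognostic metrics such as monotonicity. As a rough, self-contained sketch (not the authors' code), the Python snippet below computes per-feature SHAP values for a linear regression fitted to synthetic run-to-failure-style data, together with a Coble-style monotonicity score per feature; the synthetic data, the unit/cycle layout, and the exact metric formulation are illustrative assumptions rather than the paper's setup.

```python
import numpy as np
import shap
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for run-to-failure data: 20 units, 100 cycles, 3 sensor
# features; feature 0 is made to trend with degradation (all assumptions).
n_units, n_cycles, n_features = 20, 100, 3
X = rng.normal(size=(n_units * n_cycles, n_features))
degradation = np.tile(np.linspace(0.0, 1.0, n_cycles), n_units)
X[:, 0] += degradation
y = 100.0 * (1.0 - degradation) + rng.normal(scale=2.0, size=X.shape[0])  # RUL-like target

# Simplest of the three models mentioned in the abstract.
model = LinearRegression().fit(X, y)

# Local SHAP attributions: one value per prediction and per feature.
explainer = shap.LinearExplainer(model, X)
shap_values = explainer.shap_values(X[:200])          # shape: (200, n_features)
mean_abs_shap = np.abs(shap_values).mean(axis=0)

# Monotonicity, in one common (Coble-style) formulation: per unit and feature,
# |#positive differences - #negative differences| / (n_cycles - 1), averaged
# over units. This may differ from the exact definition used in the paper.
per_unit = X.reshape(n_units, n_cycles, n_features)
diffs = np.diff(per_unit, axis=1)
mono_per_unit = np.abs((diffs > 0).sum(axis=1) - (diffs < 0).sum(axis=1)) / (n_cycles - 1)
monotonicity = mono_per_unit.mean(axis=0)             # one score per feature

print("mean |SHAP| per feature:", mean_abs_shap)
print("monotonicity per feature:", monotonicity)
```

In an analysis along the lines the abstract describes, per-feature SHAP importances from each of the three models would then be compared against such metric scores computed on the same features.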

Original language: English
Article number: 103667
Journal: Artificial Intelligence
Volume: 306
DOIs
Publication status: Published - 2022

Keywords

  • Local interpretability
  • Model-agnostic interpretability
  • Monotonicity
  • Prognosability
  • SHAP values
  • Trendability
