TY - JOUR
T1 - Relation between prognostics predictor evaluation metrics and local interpretability SHAP values
AU - Baptista, Marcia L.
AU - Goebel, Kai
AU - Henriques, Elsa M.P.
PY - 2022
Y1 - 2022
N2 - Maintenance decisions in domains such as aeronautics are becoming increasingly dependent on being able to predict the failure of components and systems. When data-driven techniques are used for this prognostic task, they often face headwinds due to their perceived lack of interpretability. To address this issue, this paper examines how features used in a data-driven prognostic approach correlate with established metrics of monotonicity, trendability, and prognosability. In particular, we use the SHAP model (SHapley Additive exPlanations) from the field of eXplainable Artificial Intelligence (XAI) to analyze the outcome of three increasingly complex algorithms: Linear Regression, Multi-Layer Perceptron, and Echo State Network. Our goal is to test the hypothesis that the prognostic metrics correlate with the SHAP model's explanations, i.e., the SHAP values. We use baseline data from a standard data set that contains several hundred run-to-failure trajectories for jet engines. The results indicate that SHAP values track very closely with these metrics, with differences observed between the models that support the assertion that model complexity is a significant factor to consider when explainability is important in prognostics.
KW - Local interpretability
KW - Model-agnostic interpretability
KW - Monotonicity
KW - Prognosability
KW - SHAP values
KW - Trendability
UR - http://www.scopus.com/inward/record.url?scp=85125490746&partnerID=8YFLogxK
U2 - 10.1016/j.artint.2022.103667
DO - 10.1016/j.artint.2022.103667
M3 - Article
AN - SCOPUS:85125490746
SN - 0004-3702
VL - 306
JO - Artificial Intelligence
JF - Artificial Intelligence
M1 - 103667
ER -