BayLIME: Bayesian local interpretable model-agnostic explanations

Xingyu Zhao, Wei Huang, Xiaowei Huang, Valentin Robu, David Flynn

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

Given the pressing need for assuring algorithmic transparency, Explainable AI (XAI) has emerged as one of the key areas of AI research. In this paper, we develop BayLIME, a novel Bayesian extension to LIME, one of the most widely used frameworks in XAI. Compared to LIME, BayLIME exploits prior knowledge and Bayesian reasoning to improve both the consistency of repeated explanations of a single prediction and the robustness to kernel settings. BayLIME also exhibits better explanation fidelity than the state of the art (LIME, SHAP and Grad-CAM) through its ability to integrate prior knowledge from, e.g., a variety of other XAI techniques, as well as verification and validation (V&V) methods. We demonstrate the desirable properties of BayLIME through both theoretical analysis and extensive experiments.
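The abstract describes BayLIME as replacing LIME's purely data-driven local surrogate with Bayesian reasoning that can absorb prior knowledge. The sketch below is not the authors' implementation; it is a minimal illustration of the underlying idea, fitting a Bayesian linear surrogate (stock scikit-learn BayesianRidge, hyperparameters inferred from data) to weighted perturbations around one instance. The function name, perturbation scheme and kernel are illustrative assumptions; BayLIME itself additionally supports informative priors taken from other XAI or V&V sources.

    """Minimal sketch (not the authors' implementation) of the idea behind
    BayLIME: use a Bayesian linear regression as LIME's local surrogate, so
    that coefficients come with a posterior rather than a point estimate.
    All names below are illustrative assumptions."""

    import numpy as np
    from sklearn.linear_model import BayesianRidge


    def baylime_style_explanation(black_box_predict, x, num_samples=1000,
                                  kernel_width=0.75, rng=None):
        """Fit a local Bayesian linear surrogate around instance `x`.

        black_box_predict: callable mapping an (n, d) array to (n,) outputs.
        Returns the posterior mean and std of the surrogate coefficients,
        which play the role of LIME's feature-importance weights.
        """
        rng = np.random.default_rng(rng)
        d = x.shape[0]

        # Perturb the instance locally (Gaussian perturbations, as in tabular LIME).
        Z = x + rng.normal(scale=1.0, size=(num_samples, d))
        y = black_box_predict(Z)

        # Exponential kernel on distance to x, mirroring LIME's weighting scheme.
        dist = np.linalg.norm(Z - x, axis=1)
        weights = np.exp(-(dist ** 2) / (kernel_width ** 2))

        # Bayesian linear regression: the noise precision (alpha) and coefficient
        # prior precision (lambda) are inferred from the data here; BayLIME also
        # allows fixing them to encode informative priors from other sources.
        surrogate = BayesianRidge()
        surrogate.fit(Z - x, y, sample_weight=weights)

        coef_mean = surrogate.coef_                    # posterior mean importances
        coef_std = np.sqrt(np.diag(surrogate.sigma_))  # posterior uncertainty
        return coef_mean, coef_std


    if __name__ == "__main__":
        # Toy black box: a noisy quadratic, explained at a single point.
        f = lambda Z: Z[:, 0] ** 2 + 3 * Z[:, 1] + 0.1 * np.random.randn(len(Z))
        mean, std = baylime_style_explanation(f, np.array([1.0, -2.0, 0.5]))
        print("posterior mean importances:", mean)
        print("posterior std of importances:", std)

The posterior standard deviations make the consistency claim concrete: repeated explanations of the same prediction vary within the surrogate's posterior uncertainty instead of swinging with each fresh set of perturbations.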
Original language: English
Title of host publication: Uncertainty in Artificial Intelligence, 27-30 July 2021, Online
Editors: Cassio de Campos, Marloes H. Maathuis
Pages: 887-896
Volume: 161
Publication status: Published - 2021
Event: 37th International Conference on Uncertainty in Artificial Intelligence
Duration: 26 Jul 2021 - 30 Jul 2021

Publication series

Name: Proceedings of Machine Learning Research
Volume: 161
ISSN (Electronic): 2640-3498

Conference

Conference: 37th International Conference on Uncertainty in Artificial Intelligence
Period: 26/07/21 - 30/07/21
