Explaining Black-Box Models through Counterfactuals

P. Altmeyer*, C.C.S. Liem, A. van Deursen

*Corresponding author for this work

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

We present CounterfactualExplanations.jl: a package for generating Counterfactual Explanations (CE) and Algorithmic Recourse (AR) for black-box models in Julia. CE explain how inputs into a model need to change to yield specific model predictions. Explanations that involve realistic and actionable changes can be used to provide AR: a set of proposed actions for individuals to change an undesirable outcome for the better. In this article, we discuss the usefulness of CE for Explainable Artificial Intelligence and demonstrate the functionality of our package. The package is straightforward to use and designed with a focus on customization and extensibility. We envision it to one day be the go-to place for explaining arbitrary predictive models in Julia through a diverse suite of counterfactual generators.
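For readers unfamiliar with the package, the sketch below illustrates the kind of workflow the abstract describes: fitting a simple classifier and then searching for a counterfactual that flips its prediction. It follows the package's documented quickstart pattern, but the synthetic data and the specific names used here (CounterfactualData, fit_model, predict_label, select_factual, GenericGenerator, generate_counterfactual) are assumptions that may differ across package versions.

```julia
# Minimal sketch of a counterfactual search with CounterfactualExplanations.jl.
# Data is synthetic; function names follow the package's quickstart pattern
# and are assumptions that may not match every release.
using CounterfactualExplanations
using CounterfactualExplanations.Models

# Two synthetic Gaussian blobs (columns are observations), labelled 1 and 2.
X = hcat(randn(2, 100) .- 2.0, randn(2, 100) .+ 2.0)
y = vcat(fill(1, 100), fill(2, 100))
counterfactual_data = CounterfactualData(X, y)

# Fit a simple built-in linear classifier to act as the model to be explained.
M = fit_model(counterfactual_data, :Linear)

# Pick a factual instance currently predicted as class 1 and ask for class 2.
factual = 1
target = 2
chosen = rand(findall(predict_label(M, counterfactual_data) .== factual))
x = select_factual(counterfactual_data, chosen)

# Run the generic gradient-based counterfactual generator.
generator = GenericGenerator()
ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
```

Swapping GenericGenerator for another generator is how the "diverse suite of counterfactual generators" mentioned in the abstract would be exercised; the same search call stays unchanged.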
Original language: English
Title of host publication: The Proceedings of the JuliaCon Conferences (JCON)
Number of pages: 10
DOIs
Publication status: Published - 2023

Keywords

  • Julia
  • Explainable AI
  • Counterfactual Explanations
  • Algorithmic Recourse

