Abstract
Explainable artificial intelligence (XAI) aims to help people understand black box algorithms, particularly their outputs. But what are these explanations, and when is one explanation better than another? The manipulationist definition of explanation from the philosophy of science offers good answers to these questions: it holds that an explanation consists of a generalization that shows what happens in counterfactual cases. When it comes to explanatory depth, this account holds that a generalization is better when it uses more abstract variables, is broader in scope, and/or is more accurate. By applying these definitions and contrasting them with alternative definitions in the XAI literature, I hope to help clarify what a good explanation is for AI.
| Original language | English |
| ---|--- |
| Pages (from-to) | 563-584 |
| Number of pages | 22 |
| Journal | Minds and Machines |
| Volume | 32 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - 2022 |
Keywords
- Counterfactuals
- Explainability
- Explainable AI
- Manipulationism