TY - GEN
T1 - Why and How Should We Explain AI?
AU - Buijsman, Stefan
N1 - Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.
PY - 2023
Y1 - 2023
N2 - Why should we explain opaque algorithms? Here four papers are discussed that argue that, in fact, we don’t have to. Explainability, according to them, isn’t needed for trust in algorithms, nor is it needed for other goals we might have. I give a critical overview of these arguments, showing that there is still room to think that explainability is required for responsible AI. With that in mind, the second part of the paper looks at how we might achieve this end goal. I proceed not from technical tools in explainability, but rather highlight accounts of explanation in philosophy that might inform what those technical tools should ultimately deliver. While there is disagreement here on what constitutes an explanation, the three accounts surveyed offer a good overview of the current theoretical landscape in philosophy and of what information might constitute an explanation. As such, they can hopefully inspire improvements to the technical explainability tools.
KW - AI ethics
KW - Explainability
KW - Trust
UR - http://www.scopus.com/inward/record.url?scp=85152529561&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-24349-3_11
DO - 10.1007/978-3-031-24349-3_11
M3 - Conference contribution
AN - SCOPUS:85152529561
SN - 9783031243486
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 196
EP - 215
BT - Human-Centered Artificial Intelligence - Advanced Lectures
A2 - Chetouani, Mohamed
A2 - Dignum, Virginia
A2 - Lukowicz, Paul
A2 - Sierra, Carles
PB - Springer
T2 - 18th European Advanced Course on Artificial Intelligence, ACAI 2021
Y2 - 11 October 2021 through 15 October 2021
ER -