Abstract
We discuss the impact of presenting explanations to people for Artificial Intelligence (AI) decisions powered by neural networks, according to three types of logical reasoning (inductive, deductive, and abductive). We start from examples in the existing literature on explaining artificial neural networks. We see that abductive reasoning is (unintentionally) the most commonly used default in user testing when comparing the quality of explanation techniques. We discuss whether this may be because this reasoning type balances the technical challenges of generating the explanations against the effectiveness of the explanations. Also, by illustrating how the original (abductive) explanation can be converted into the remaining two reasoning types, we are able to identify the considerations needed to support these kinds of transformations.
Original language | English
---|---
Title of host publication | XAI.it workshop
Number of pages | 8
Volume | CEUR.ws (volume 2742)
Publication status | Published - 2020
Externally published | Yes
Event | Italian Workshop on Explainable Artificial Intelligence 2020: XAI.it 2020 - Virtual/online event due to COVID-19
Duration | 18 Nov 2020 → 26 Nov 2020
Conference
Conference | Italian Workshop on Explainable Artificial Intelligence 2020
---|---
Period | 18/11/20 → 26/11/20