Considerations for Applying Logical Reasoning to Explain Neural Network Outputs

Federico Maria Cau, Lucio Davide Spano, Nava Tintarev

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

Abstract

We discuss the impact of presenting explanations to people for Artificial Intelligence (AI) decisions powered by Neural Networks, according to three types of logical reasoning (inductive, deductive, and abductive). We start from examples in the existing literature on explaining artificial neural networks. We see that abductive reasoning is (unintentionally) the most commonly used as the default in user testing for comparing the quality of explanation techniques. We discuss whether this may be because this reasoning type balances the technical challenges of generating the explanations with the effectiveness of the explanations. Also, by illustrating how the original (abductive) explanation can be converted into the remaining two reasoning types, we are able to identify considerations needed to support these kinds of transformations.
Original language: English
Title of host publication: XAI.it workshop
Number of pages: 8
Volume: CEUR-WS (volume 2742)
Publication status: Published - 2020
Externally published: Yes
Event: Italian Workshop on Explainable Artificial Intelligence 2020 (XAI.it 2020) - Virtual/online event due to COVID-19
Duration: 18 Nov 2020 - 26 Nov 2020

Conference

Conference: Italian Workshop on Explainable Artificial Intelligence 2020
Period: 18/11/20 - 26/11/20

