TY - JOUR
T1 - The perils and pitfalls of explainable AI
T2 - Strategies for explaining algorithmic decision-making
AU - de Bruijn, Hans
AU - Warnier, Martijn
AU - Janssen, Marijn
PY - 2022
Y1 - 2022
N2 - Governments look at explainable artificial intelligence's (XAI) potential to tackle the criticisms of the opaqueness of algorithmic decision-making with AI. Although XAI is appealing as a solution for automated decisions, the wicked nature of the challenges governments face complicates the use of XAI. Wickedness means that the facts that define a problem are ambiguous and that there is no consensus on the normative criteria for solving this problem. In such a situation, the use of algorithms can result in distrust. Whereas there is much research advancing XAI technology, the focus of this paper is on strategies for explainability. Three illustrative cases are used to show that explainable, data-driven decisions are often not perceived as objective by the public. The context might raise strong incentives to contest and distrust the explanation of AI, and as a consequence, fierce resistance from society is encountered. To overcome the inherent problems of XAI, decision-specific strategies are proposed to lead to societal acceptance of AI-based decisions. We suggest strategies to embrace explainable decisions and processes, co-create decisions with societal actors, move away from an instrumental to an institutional approach, use competing and value-sensitive algorithms, and mobilize the tacit knowledge of professionals.
AB - Governments look at explainable artificial intelligence's (XAI) potential to tackle the criticisms of the opaqueness of algorithmic decision-making with AI. Although XAI is appealing as a solution for automated decisions, the wicked nature of the challenges governments face complicates the use of XAI. Wickedness means that the facts that define a problem are ambiguous and that there is no consensus on the normative criteria for solving this problem. In such a situation, the use of algorithms can result in distrust. Whereas there is much research advancing XAI technology, the focus of this paper is on strategies for explainability. Three illustrative cases are used to show that explainable, data-driven decisions are often not perceived as objective by the public. The context might raise strong incentives to contest and distrust the explanation of AI, and as a consequence, fierce resistance from society is encountered. To overcome the inherent problems of XAI, decision-specific strategies are proposed to lead to societal acceptance of AI-based decisions. We suggest strategies to embrace explainable decisions and processes, co-create decisions with societal actors, move away from an instrumental to an institutional approach, use competing and value-sensitive algorithms, and mobilize the tacit knowledge of professionals.
KW - Accountability
KW - Algorithms
KW - Artificial intelligence
KW - Computational intelligence
KW - Data-driven decision
KW - E-government
KW - Socio-tech
KW - Transparency
KW - Trust
KW - XAI
UR - http://www.scopus.com/inward/record.url?scp=85121995316&partnerID=8YFLogxK
U2 - 10.1016/j.giq.2021.101666
DO - 10.1016/j.giq.2021.101666
M3 - Article
AN - SCOPUS:85121995316
SN - 0740-624X
VL - 39
JO - Government Information Quarterly
JF - Government Information Quarterly
IS - 2
M1 - 101666
ER -