Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK

Luca Nannini, Agathe Balayn, Adam Leon Smith

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › Peer-reviewed

2 Citations (Scopus)
111 Downloads (Pure)

Abstract

Public attention to the explainability of artificial intelligence (AI) systems has risen in recent years, driven by the need for methodologies for human oversight. This has translated into a proliferation of research outputs, such as those from Explainable AI, aimed at enhancing transparency and control for system debugging and monitoring, and at making system processes and outputs intelligible for user services. Yet such outputs are difficult to adopt in practice due to the lack of a common regulatory baseline and the contextual nature of explanations. Governmental policies are now attempting to address this need; however, it remains unclear to what extent published communications, regulations, and standards adopt an informed perspective to support research, industry, and civil interests. In this study, we perform the first thematic and gap analysis of this plethora of policies and standards on explainability in the EU, US, and UK. Through a rigorous survey of policy documents, we first contribute an overview of governmental regulatory trajectories within AI explainability and its sociotechnical impacts. We find that policies are often informed by coarse notions of, and requirements for, explanations. This may be due to a willingness to frame explanations foremost as a risk-management tool for AI oversight, but also to the lack of consensus on what constitutes a valid algorithmic explanation and on how feasible the implementation and deployment of such explanations are across the stakeholders of an organization. Informed by AI explainability research, we then conduct a gap analysis of existing policies, which leads us to formulate a set of recommendations on how to address explainability in regulations for AI systems, in particular concerning the definition, feasibility, and usability of explanations, as well as the allocation of accountability to explanation providers.
Original language: English
Title of host publication: Proceedings of the 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023
Publisher: Association for Computing Machinery (ACM)
Pages: 1198-1212
Number of pages: 15
ISBN (Electronic): 978-1-4503-7252-7
DOIs
Publication status: Published - 2023
Event: 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023 - Chicago, United States
Duration: 12 Jun 2023 - 15 Jun 2023

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023
Country/Territory: United States
City: Chicago
Period: 12/06/23 - 15/06/23

Keywords

  • AI policy
  • Explainable AI
  • social epistemology
