“☑ Fairness Toolkits, A Checkbox Culture?” On the Factors that Fragment Developer Practices in Handling Algorithmic Harms

Agathe Balayn, Mireia Yurrita, Jie Yang, Ujwal Gadiraju

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

22 Citations (SciVal)
96 Downloads (Pure)

Abstract

Fairness toolkits are developed to support machine learning (ML) practitioners in using algorithmic fairness metrics and mitigation methods. Past studies have investigated practical challenges for toolkit usage, which are crucial to understanding how to support practitioners. However, the extent to which fairness toolkits impact practitioners’ practices and enable reflexivity around algorithmic harms (i.e., distributive unfairness beyond algorithmic fairness, and harms that are not related to the outputs of ML systems) remains unclear. Little is currently understood about the root factors that fragment practices when using fairness toolkits and how practitioners reflect on algorithmic harms. Yet, a deeper understanding of these facets is essential to enable the design of support tools for practitioners. To investigate the impact of toolkits on practices and identify factors that shape these practices, we carried out a qualitative study with 30 ML practitioners with varying backgrounds. Through a mixed within- and between-subjects design, we tasked the practitioners with developing an ML model, and analyzed their reported practices to surface potential factors that lead to differences in practices. Interestingly, we found that fairness toolkits act as double-edged swords — with potentially positive and negative impacts on practices. Our findings showcase a plethora of human and organizational factors that play a key role in the way toolkits are envisioned and employed. These results bear implications for the design of future toolkits and educational training for practitioners, and call for the creation of new policies to handle the organizational constraints faced by practitioners.
Original language: English
Title of host publication: AIES 2023 - Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society
Place of Publication: New York, NY
Publisher: Association for Computing Machinery (ACM)
Pages: 482–495
Number of pages: 14
ISBN (Electronic): 9798400702310
ISBN (Print): 979-8-4007-0231-0
DOIs
Publication status: Published - 2023
Event: 2023 AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, AIES 2023 - Montreal, Canada
Duration: 8 Aug 2023 – 10 Aug 2023

Publication series

Name: AIES 2023 - Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society

Conference

Conference: 2023 AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, AIES 2023
Country/Territory: Canada
City: Montreal
Period: 8/08/23 – 10/08/23

Keywords

  • practices
  • organisational factors
  • human factors
  • fairness toolkits
  • algorithmic harms
  • algorithmic fairness

