Bias in Automated Speaker Recognition

Wiebke Toussaint Hutiri*, Aaron Yi Ding

*Corresponding author for this work

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

17 Citations (Scopus)
118 Downloads (Pure)

Abstract

Automated speaker recognition uses data processing to identify speakers by their voice. Today, automated speaker recognition is deployed on billions of smart devices and in services such as call centres. Despite their wide-scale deployment and known sources of bias in related domains like face recognition and natural language processing, bias in automated speaker recognition has not been studied systematically. We present an in-depth empirical and analytical study of bias in the machine learning development workflow of speaker verification, a voice biometric and core task in automated speaker recognition. Drawing on an established framework for understanding sources of harm in machine learning, we show that bias exists at every development stage in the well-known VoxCeleb Speaker Recognition Challenge, including data generation, model building, and implementation. Most affected are female speakers and non-US nationalities, who experience significant performance degradation. Leveraging the insights from our findings, we make practical recommendations for mitigating bias in automated speaker recognition, and outline future research directions.
Original language: English
Title of host publication: Proceedings of 2022 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022
Publisher: Association for Computing Machinery (ACM)
Pages: 230-247
Number of pages: 18
ISBN (Electronic): 978-1-4503-9352-2
DOIs
Publication status: Published - 2022
Event: 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022 - Virtual, Online, Korea, Republic of
Duration: 21 Jun 2022 – 24 Jun 2022

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022
Country/Territory: Korea, Republic of
City: Virtual, Online
Period: 21/06/22 – 24/06/22

Keywords

  • audit
  • bias
  • evaluation
  • fairness
  • speaker recognition
  • speaker verification
