Spotting When Algorithms Are Wrong

Stefan Buijsman*, Herman Veluwenkamp

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

3 Citations (Scopus)
91 Downloads (Pure)

Abstract

Users of sociotechnical systems often have no way to independently verify whether the system output on which they base their decisions is correct; they are epistemically dependent on the system. We argue that this leads to problems when the system is wrong, namely to bad decisions and violations of the norms of practical reasoning. To prevent this from occurring, we suggest the implementation of defeaters: information that a system is unreliable in a specific case (undercutting defeat) or independent information that the output is wrong (rebutting defeat). Practically, we suggest designing defeaters based on the different ways in which a system might produce erroneous outputs, and we analyse this suggestion with a case study of the risk classification algorithm used by the Dutch tax agency.
Original language: English
Pages (from-to): 541-562
Number of pages: 22
Journal: Minds and Machines
Volume: 33
Issue number: 4
DOIs
Publication status: Published - 2022

Keywords

  • Defeaters
  • Epistemic dependence
  • Oversight
  • Sociotechnical systems
