Spotting When Algorithms Are Wrong

Stefan Buijsman*, Herman Veluwenkamp

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › Peer-reviewed



Users of sociotechnical systems often have no way to independently verify whether the system output on which they base decisions is correct; they are epistemically dependent on the system. We argue that this leads to problems when the system is wrong, namely to bad decisions and violations of the norms of practical reasoning. To prevent this from occurring, we suggest the implementation of defeaters: information that a system is unreliable in a specific case (undercutting defeat), or independent information that the output is wrong (rebutting defeat). Practically, we suggest designing defeaters based on the different ways in which a system might produce erroneous outputs, and we analyse this suggestion through a case study of the risk classification algorithm used by the Dutch tax agency.
Original language: English
Number of pages: 22
Journal: Minds and Machines
Publication status: Published - 2022


Keywords:

  • Defeaters
  • Epistemic dependence
  • Oversight
  • Sociotechnical systems


