The risks of autonomous machines: from responsibility gaps to control gaps

Frank Hindriks, H.M. Veluwenkamp

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Responsibility gaps concern the attribution of blame for harms caused by autonomous machines. The worry has been that, because they are artificial agents, it is impossible to attribute blame, even though doing so would be appropriate given the harms they cause. We argue that there are no responsibility gaps. The harms can be blameless. And if they are not, the blame that is appropriate is indirect and can be attributed to designers, engineers, software developers, manufacturers or regulators. The real problem lies elsewhere: autonomous machines should be built so as to exhibit a level of risk that is morally acceptable. If they fall short of this standard, they exhibit what we call ‘a control gap.’ The causal control that autonomous machines have will then fall short of the guidance control they should emulate.

Original language: English
Article number: 21
Journal: Synthese: an international journal for epistemology, methodology and philosophy of science
Volume: 201
Issue number: 1
DOIs
Publication status: Published - 2023

