Killing by Autonomous Vehicles and the Legal Doctrine of Necessity

Filippo Santoni de Sio*

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

58 Citations (Scopus)
143 Downloads (Pure)


How should autonomous vehicles (aka self-driving cars) be programmed to behave in the event of an unavoidable accident in which the only choice open is one between causing different damages or losses to different objects or persons? This paper addresses this ethical question starting from the normative principles elaborated in the law to regulate difficult choices in other emergency scenarios. In particular, the paper offers a rational reconstruction of some major principles and norms embedded in the Anglo-American jurisprudence and case law on the “doctrine of necessity”; and assesses which, if any, of these principles and norms can be utilized to find reasonable guidelines for solving the ethical issue of the regulation of the programming of autonomous vehicles in emergency situations. The paper covers the following topics: the distinction between “justification” and “excuse”, the legal prohibition of intentional killing outside self-defence, the incommensurability of goods, and the legal constraints on the use of lethal force set by normative positions: obligations, responsibility, rights, and authority. For each of these principles and constraints the possible application to the programming of autonomous vehicles is discussed. Based on the analysis, some practical suggestions are offered.

Original language: English
Pages (from-to): 411-429
Number of pages: 19
Journal: Ethical Theory and Moral Practice: an international forum
Issue number: 2
Publication status: Published - 2017


  • Ethics of autonomous vehicles
  • Ethics of self-driving cars
  • Legal doctrine of necessity
  • Robot ethics
  • Trolley problem
