Normative uncertainty and societal preferences: The problem with evaluative standards

Sietze Kai Kuilman*, Koji Andriamahery, Catholijn M. Jonker, Luciano Cavalcante Siebert

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Many technological systems today interact with their environment with increasingly little human intervention. This situation raises the stakes and creates consequences that society needs to manage: we are no longer dealing with 404 pages, as AI systems today may cause serious harm. To address this, we wish to exert a kind of control over these systems so that they adhere to our moral beliefs. However, given the plurality of values in our societies, which “oughts” ought these machines to adhere to? In this article, we examine Borda voting as a way to maximize expected choice-worthiness among individuals across different possible “implementations” of ethical principles. We use data from the Moral Machine experiment to illustrate the effectiveness of such a voting system. Although it appears effective on average, the maximization of expected choice-worthiness depends heavily on how the principles are formulated. While Borda voting may be a good way of ensuring outcomes that are preferable to many, the larger problems in maximizing expected choice-worthiness, such as the capacity to formulate credences well, remain notoriously difficult; hence, we argue that such mechanisms should be implemented with caution and that these other problems ought to be solved first.
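As a brief illustration of the aggregation mechanism the abstract refers to (a minimal sketch of a standard Borda count, not the paper's implementation; the option names below are hypothetical, not drawn from the Moral Machine data):

```python
from collections import defaultdict

def borda_winner(rankings):
    """Aggregate individual preference rankings with a Borda count.

    Each ranking lists options from most to least preferred; the option
    in position i of an m-option ranking earns m - 1 - i points.
    Returns the option with the highest total score.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for i, option in enumerate(ranking):
            scores[option] += m - 1 - i
    return max(scores, key=scores.get)

# Hypothetical rankings over dilemma outcomes, in the spirit of
# Moral Machine-style scenarios (labels invented for this sketch).
rankings = [
    ["spare_pedestrians", "spare_passengers", "swerve"],
    ["spare_pedestrians", "swerve", "spare_passengers"],
    ["spare_passengers", "spare_pedestrians", "swerve"],
]
print(borda_winner(rankings))  # "spare_pedestrians" wins with 2 + 2 + 1 = 5 points
```

Because the Borda count scores every position in every ranking, it tends to favor broadly acceptable options over polarizing ones, which is the sense in which such a vote can yield "outcomes that are preferable to many."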
Original language: English
Number of pages: 8
Journal: Frontiers in Neuroergonomics
Volume: 4
DOIs
Publication status: Published - 2023

Keywords

  • normative uncertainty
  • limit of forms
  • ethics
  • preference profiles
  • moral machine
  • self-driving cars

