The Design of Human Oversight in Autonomous Weapon Systems

Ilse Verdiesen*

*Corresponding author for this work

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-reviewed

2 Citations (Scopus)
135 Downloads (Pure)

Abstract

As the reach and capabilities of Artificial Intelligence (AI) systems increase, there is also a growing awareness of the ethical, legal and societal impact of the potential actions and decisions of these systems. Many are calling for guidelines and regulations that can ensure the responsible design, development, implementation, and policy of AI. In the scientific literature, AI is characterized by the concepts of Adaptability, Interactivity and Autonomy (Floridi & Sanders, 2004). According to Floridi and Sanders (2004), Adaptability means that the system can change based on its interaction and can learn from its experience; machine learning techniques are an example of this. Interactivity occurs when the system and its environment act upon each other, and Autonomy implies that the system can change its own state independently of direct interaction.
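As a minimal illustration of these three properties, consider the toy Python sketch below. The agent, its method names, and its numeric update rules are hypothetical and are not taken from the paper; they only serve to make the definitions concrete.

import random


class SimpleAdaptiveAgent:
    """Toy agent illustrating Adaptability, Interactivity and Autonomy
    in the sense of Floridi & Sanders (2004). Purely illustrative."""

    def __init__(self):
        # Internal state the agent can change on its own (Autonomy).
        self.state = 0.0
        # A learned parameter that shifts with experience (Adaptability).
        self.weight = 0.5

    def perceive_and_act(self, observation: float) -> float:
        # Interactivity: the agent and its environment act upon each other.
        return self.weight * observation + self.state

    def learn(self, reward: float) -> None:
        # Adaptability: behaviour changes based on past interaction.
        self.weight += 0.1 * reward

    def internal_update(self) -> None:
        # Autonomy: the agent changes its own state without external input.
        self.state += random.uniform(-0.1, 0.1)


if __name__ == "__main__":
    agent = SimpleAdaptiveAgent()
    for step in range(3):
        action = agent.perceive_and_act(observation=1.0)  # interact with the environment
        agent.learn(reward=0.2)                           # adapt from experience
        agent.internal_update()                           # change state autonomously
        print(f"step={step} action={action:.2f} weight={agent.weight:.2f}")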

Original language: English
Title of host publication: AIES 2018 - Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society
Editors: V. Conitzer, S. Kambhampati, S. Koenig, F. Rossi, B. Schnabel
Publisher: Association for Computing Machinery (ACM)
Pages: 388-389
Number of pages: 2
ISBN (Electronic): 978-1-4503-6012-8
Publication status: Published - 2018
Event: 1st AAAI/ACM Conference on AI, Ethics, and Society, AIES 2018 - New Orleans, United States
Duration: 2 Feb 2018 - 3 Feb 2018

Conference

Conference: 1st AAAI/ACM Conference on AI, Ethics, and Society, AIES 2018
Country/Territory: United States
City: New Orleans
Period: 2/02/18 - 3/02/18

Keywords

  • autonomous weapons systems
  • ethical decision-making
  • human oversight
  • moral judgement
