The future of war: could lethal autonomous weapons make conflict more ethical?

Steven Umbrello*, Phil Torres, Angelo F. De Bellis

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

5 Citations (Scopus)

Abstract

Lethal autonomous weapons (LAWs) are robotic weapon systems, primarily of military value, that can engage in offensive or defensive actions without human intervention. This paper assesses the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Particular attention is given to ethical LAWs: artificially intelligent weapon systems that make decisions within the bounds of an ethics-based code. To provide a wide, though not exhaustive, survey of the implications of replacing humans in warfare with such devices, the paper engages with current scholarship on the rejection or acceptance of LAWs, including the present technological shortcomings of LAWs in differentiating between targets and the behavioral and psychological volatility of humans, as well as current and proposed regulatory infrastructures for developing and deploying such systems. After careful consideration of these factors, the paper concludes that only ethical LAWs should be used to replace human involvement in war and that, by virtue of their consistency, they should remove humans from war until a better means of conducting ethical warfare is found.

Original language: English
Pages (from-to): 273-282
Number of pages: 10
Journal: AI and Society
Volume: 35
Issue number: 1
DOIs
Publication status: Published - 2020
Externally published: Yes

Keywords

  • Artificial intelligence
  • Ethics
  • Laws of war
  • Lethal autonomous weapons
  • Military robots

