Towards a Framework for Certification of Reliable Autonomous Systems

Michael Fisher, Viviana Mascardi, Kristin Yvonne Rozier, Bernd-Holger Schlingloff, Michael Winikoff, Neil Yorke-Smith

Research output: Contribution to journal › Article › Scientific › peer-review

36 Citations (Scopus) · 606 Downloads (Pure)

Abstract

A computational system is called autonomous if it is able to make its own decisions, or take its own actions, without human supervision or control. The capability and spread of such systems have reached the point where they are beginning to touch much of everyday life. However, regulators grapple with how to deal with autonomous systems; for example, how could we certify an Unmanned Aerial System for autonomous use in civilian airspace? We analyse what is needed to provide verified reliable behaviour of an autonomous system, examine what the state of the art in automated verification can achieve, and propose a roadmap towards developing regulatory guidelines, articulating challenges for researchers, engineers, and regulators. Case studies in seven distinct domains illustrate the article.
Original language: English
Article number: 8
Pages (from-to): 1-65
Number of pages: 65
Journal: Autonomous Agents and Multi-Agent Systems
Volume: 35
Issue number: 1
DOIs
Publication status: Published - 2021

Keywords

  • Artificial intelligence
  • Autonomous systems
  • Certification
  • Verification
