Human-Feedback Shield Synthesis for Perceived Safety in Deep Reinforcement Learning

Daniel Marta*, Christian Pek, Gaspar I. Melsion, Jana Tumova, Iolanda Leite

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

6 Citations (Scopus)

Abstract

Despite the successes of deep reinforcement learning (RL), it is still challenging to obtain safe policies. Formal verification approaches ensure safety at all times, but usually overly restrict the agent's behaviors, since they assume adversarial behavior of the environment. Instead of assuming adversarial behavior, we suggest focusing on perceived safety, i.e., policies that avoid undesired behaviors while having a desired level of conservativeness. To obtain policies that are perceived as safe, we propose a shield synthesis framework with two distinct loops: (1) an inner loop that trains policies with a set of actions that is constrained by shields whose conservativeness is parameterized, and (2) an outer loop that presents example rollouts of the policy to humans and collects their feedback to update the parameters of the shields in the inner loop. We demonstrate our approach on an RL benchmark of lunar landing and a scenario in which a mobile robot navigates around humans. For the latter, we conducted two user studies to obtain policies that were perceived as safe. Our results indicate that our framework converges to policies that are perceived as safe, is robust against noisy feedback, and can query feedback for multiple policies at the same time.
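The two-loop structure described in the abstract can be pictured with a minimal Python sketch. Everything below is a hypothetical illustration, not the authors' implementation: the names (Shield, train_policy, collect_human_feedback, synthesize_shield), the toy action-filtering rule, and the scalar feedback signal are all assumptions; the actual framework trains deep RL policies and queries real human participants.

# Minimal sketch of the two-loop shield synthesis idea from the abstract.
# All names and rules here are hypothetical placeholders, not the paper's code.
import random
from dataclasses import dataclass

@dataclass
class Shield:
    """Parameterized shield: larger `conservativeness` blocks more actions."""
    conservativeness: float  # assumed to lie in [0, 1]

    def allowed_actions(self, state, actions):
        # Toy rule: treat large-magnitude actions as risky and block them
        # more aggressively as conservativeness grows.
        threshold = 1.0 - self.conservativeness
        return [a for a in actions if abs(a) <= threshold] or [0.0]

def train_policy(shield, n_episodes=50):
    """Inner loop (stand-in): 'train' a policy restricted to shielded actions."""
    actions = [-1.0, -0.5, 0.0, 0.5, 1.0]
    policy = {}
    for _ in range(n_episodes):
        state = random.random()
        safe_actions = shield.allowed_actions(state, actions)
        policy[round(state, 1)] = random.choice(safe_actions)  # placeholder for an RL update
    return policy

def collect_human_feedback(policy):
    """Outer loop query (stand-in): a signal in [-1, 1], where negative means
    'too risky' and positive means 'too conservative'."""
    return random.uniform(-1.0, 1.0)

def synthesize_shield(n_outer_iters=10, step=0.05):
    shield = Shield(conservativeness=0.5)
    policy = {}
    for _ in range(n_outer_iters):
        policy = train_policy(shield)              # inner loop: train under the shield
        feedback = collect_human_feedback(policy)  # outer loop: show rollouts, ask humans
        # Nudge the shield parameter from feedback and clamp it to [0, 1].
        shield.conservativeness = min(1.0, max(0.0, shield.conservativeness - step * feedback))
    return shield, policy

if __name__ == "__main__":
    shield, policy = synthesize_shield()
    print(f"Final conservativeness: {shield.conservativeness:.2f}")

The sketch only conveys the division of labor: the inner loop restricts the agent's action set through a parameterized shield, while the outer loop adjusts that parameter from human judgments of example rollouts.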

Original language: English
Pages (from-to): 406-413
Journal: IEEE Robotics and Automation Letters
Volume: 7
Issue number: 1
DOIs
Publication status: Published - 2022
Externally published: Yes

Keywords

  • human factors and human-in-the-loop
  • reinforcement learning
  • Safety in HRI
