AlwaysSafe: Reinforcement Learning without Safety Constraint Violations during Training

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

Deploying reinforcement learning (RL) involves major concerns around safety. Engineering a reward signal that allows the agent to maximize its performance while remaining safe is not trivial. Safe RL studies how to mitigate such problems. For instance, we can decouple safety from reward using constrained Markov decision processes (CMDPs), where an independent signal models the safety aspects. In this setting, an RL agent can autonomously find tradeoffs between performance and safety. Unfortunately, most RL agents designed for CMDPs only guarantee safety after the learning phase, which might prevent their direct deployment. In this work, we investigate settings where a concise abstract model of the safety aspects is given, a reasonable assumption since a thorough understanding of safety-related matters is a prerequisite for deploying RL in typical applications. Factored CMDPs provide such compact models when a small subset of features describes the dynamics relevant for the safety constraints. We propose an RL algorithm that uses this abstract model to learn policies for CMDPs safely, that is, without violating the constraints. During the training process, this algorithm can seamlessly switch from a conservative policy to a greedy policy without violating the safety constraints. We prove that this algorithm is safe under the given assumptions. Empirically, we show that even if safety and reward signals are contradictory, this algorithm always operates safely and, when they are aligned, this approach also improves the agent's performance.
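The abstract describes switching between a conservative policy and a greedy one while keeping a safety constraint satisfied, using an abstract model of the safety-relevant dynamics. The toy sketch below illustrates that general idea only; the function names, the cost model, and the budget check are illustrative assumptions, not the paper's actual AlwaysSafe algorithm or its CMDP formulation.

```python
# Illustrative sketch: conservative-to-greedy policy switching under a
# safety-cost budget. All components here are toy placeholders.

def select_action(state, greedy_policy, safe_policy, predicted_cost, budget):
    """Use the greedy action only when the (assumed) abstract safety model
    predicts its cost stays within the budget; otherwise fall back to the
    conservative policy, which is taken to be safe by construction."""
    action = greedy_policy(state)
    if predicted_cost(state, action) <= budget:
        return action          # greedy action certified by the abstract model
    return safe_policy(state)  # conservative fallback

# Toy instantiation: integer states; action 1 is risky near the origin.
greedy = lambda s: 1                                # always prefers the risky action
safe = lambda s: 0                                  # conservative, zero-cost action
cost_model = lambda s, a: a * (1 if s < 3 else 0)   # risky only for states 0..2
budget = 0

actions = [select_action(s, greedy, safe, cost_model, budget) for s in range(5)]
print(actions)  # conservative near the origin, greedy where the model certifies safety
```

In the paper's setting the certification comes from a compact factored model of the safety-relevant features, which is what makes the switch safe throughout training rather than only after convergence.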
Original language: English
Title of host publication: Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems
Place of Publication: Richland, SC
Publisher: International Foundation for Autonomous Agents and Multiagent Systems
Pages: 1226-1235
Number of pages: 10
ISBN (Electronic): 9781450383073
Publication status: Published - 2021
Event: 20th International Conference on Autonomous Agents and Multiagent Systems - Virtual/online event due to COVID-19
Duration: 3 May 2021 - 7 May 2021
Conference number: 20

Publication series

Name: AAMAS '21
Publisher: International Foundation for Autonomous Agents and Multiagent Systems
ISSN (Electronic): 2523-5699

Conference

Conference: 20th International Conference on Autonomous Agents and Multiagent Systems
Abbreviated title: AAMAS 2021
Period: 3/05/21 - 7/05/21

