WCSAC: Worst-Case Soft Actor Critic for Safety-Constrained Reinforcement Learning

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

Safe exploration is regarded as a key priority area for reinforcement learning research. With separate reward and safety signals, it is natural to cast the problem as constrained reinforcement learning, where the expected long-term costs of policies are constrained. However, it can be hazardous to set constraints on the expected safety signal without considering the tail of the distribution: in safety-critical domains, worst-case analysis is required to avoid disastrous results. We present a novel reinforcement learning algorithm, Worst-Case Soft Actor Critic, which extends the Soft Actor Critic algorithm with a safety critic to achieve risk control. More specifically, a certain level of Conditional Value-at-Risk of the cost distribution is used as a safety measure to judge constraint satisfaction, which guides the adaptation of safety weights that trade off reward and safety. As a result, we can optimize policies under the premise that their worst-case performance satisfies the constraints. The empirical analysis shows that our algorithm attains better risk control than expectation-based methods.
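
For illustration, below is a minimal sketch (not the authors' released code) of the two ingredients the abstract describes: a CVaR-style safety measure and an adaptive safety weight that trades off reward and safety. It assumes the safety critic approximates the cost return as a Gaussian, uses one common closed form for Gaussian CVaR, and updates the safety weight by simple dual ascent; the specific parameterization, risk level, and learning rate are assumptions for the example and may differ from the paper's exact formulation.

```python
from statistics import NormalDist
import math


def gaussian_cvar(mean: float, std: float, alpha: float) -> float:
    """CVaR over the worst alpha-fraction of a Gaussian cost return.

    Uses the closed form CVaR_alpha = mean + std * pdf(z) / alpha with
    z = Phi^{-1}(1 - alpha). Conventions for alpha differ across papers,
    so treat this as one plausible parameterization, not the paper's own.
    """
    z = NormalDist().inv_cdf(1.0 - alpha)                    # VaR quantile
    pdf_z = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    return mean + std * pdf_z / alpha


def update_safety_weight(weight: float, cvar_cost: float,
                         cost_limit: float, lr: float = 1e-3) -> float:
    """Dual-ascent style update of the adaptive safety weight:
    increase it when the CVaR of the cost exceeds the limit,
    decrease it otherwise, keeping it non-negative (hypothetical rule)."""
    return max(0.0, weight + lr * (cvar_cost - cost_limit))


# Example: a policy whose expected cost (4.0) is within the limit (5.0)
# can still violate the constraint in the worst 10% of outcomes.
cvar = gaussian_cvar(mean=4.0, std=3.0, alpha=0.1)
weight = update_safety_weight(weight=1.0, cvar_cost=cvar, cost_limit=5.0)
print(f"CVaR_0.1 = {cvar:.2f}, updated safety weight = {weight:.4f}")
```

In this toy setting the expectation-based constraint (mean cost 4.0 below the limit 5.0) is satisfied, yet the CVaR over the worst tail exceeds the limit, so the safety weight grows and the policy is pushed toward safer behavior.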
Original language: English
Title of host publication: Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI-21)
Pages: 10639-10646
Number of pages: 8
Publication status: Published - 2021

Bibliographical note

Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care

Unless indicated otherwise in the copyright section, the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.

Keywords

  • Reinforcement Learning
