Refined Risk Management in Safe Reinforcement Learning with a Distributional Safety Critic

Research output: Conference contribution in conference proceedings (scientific, peer-reviewed)



Safety is critical to broadening the real-world use of reinforcement learning (RL). Modeling the safety aspects with a safety-cost signal separate from the reward is becoming standard practice, since it avoids the problem of finding a good balance between safety and performance. However, the distribution of the total safety-cost across different trajectories is still largely unexplored. In this paper, we propose an actor-critic method for safe RL that uses an implicit quantile network to approximate the distribution of accumulated safety-costs. Using an accurate estimate of this distribution, in particular of its upper tail, greatly improves the performance of risk-averse RL agents. The empirical analysis shows that our method achieves good risk control in complex safety-constrained environments.
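The abstract does not spell out how a tail-risk measure is obtained from the quantile critic, but a common choice in risk-averse distributional RL is the conditional value-at-risk (CVaR), i.e. the mean of the upper tail of the cost distribution. The following is a minimal NumPy sketch, not the paper's implementation: the function name `cvar_from_quantiles` and the stand-in exponential cost model are hypothetical, assuming the critic outputs quantile values at midpoint fractions as an implicit quantile network typically does.

```python
import numpy as np

def cvar_from_quantiles(quantile_values, alpha=0.9):
    """Estimate CVaR_alpha of the safety-cost distribution as the
    mean of the upper (1 - alpha) tail of sampled quantile values.

    quantile_values: critic outputs at quantile fractions
    tau_i = (i + 0.5) / N, i = 0..N-1 (hypothetical interface).
    """
    q = np.sort(np.asarray(quantile_values, dtype=float))
    k = max(1, int(np.ceil((1.0 - alpha) * len(q))))  # upper-tail size
    return q[-k:].mean()

# Stand-in for the critic: quantiles of an Exp(1) accumulated-cost
# model via its inverse CDF (purely illustrative, not from the paper).
taus = (np.arange(32) + 0.5) / 32
quantiles = -np.log(1.0 - taus)
risk = cvar_from_quantiles(quantiles, alpha=0.9)
```

A risk-averse agent would penalize actions by this tail estimate rather than by the expected cost, which is what makes an accurate upper-tail approximation matter.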
Original language: English
Title of host publication: Safe RL Workshop at IJCAI 2022
Editors: David Bossens, Stephen Giguere, Roderick Bloem, Bettina Koenighofer
Number of pages: 4
Publication status: Published - 2022
Event: International Workshop on Safe Reinforcement Learning - Vienna, Austria
Duration: 23 Jul 2022 - 23 Jul 2022
Conference number: 1


Workshop: International Workshop on Safe Reinforcement Learning
Abbreviated title: Safe RL workshop


