Abstract
Without an assigned task, a suitable intrinsic objective for an agent is to explore the environment efficiently. However, pursuing exploration inevitably brings additional safety risks.
An under-explored aspect of reinforcement learning is how to achieve safe and efficient exploration when the task is unknown.
In this paper, we propose a practical Constrained Entropy Maximization (CEM) algorithm to solve task-agnostic safe exploration problems, which naturally call for finite-horizon, undiscounted constraints on safety costs.
The CEM algorithm aims to learn a policy that maximizes the state entropy under the premise of safety.
To avoid approximating the state density in complex domains, CEM leverages a $k$-nearest neighbor entropy estimator to evaluate the efficiency of exploration.
In terms of safety, CEM minimizes the safety costs and adaptively trades off safety against exploration based on the current degree of constraint satisfaction. We empirically show that CEM learns a safe exploration policy in complex continuous-control domains, and that the learned policy improves the safety and sample efficiency of downstream tasks.
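To make the entropy objective concrete, the following is a minimal sketch of a k-nearest-neighbor (Kozachenko-Leonenko style) entropy estimate over visited states, as the abstract describes: the farther each state lies from its k-th nearest neighbor, the higher the estimated entropy. This is an illustrative sketch, not the paper's implementation; the function name and constants are assumptions.

```python
import math

def knn_entropy_estimate(states, k=3):
    """Estimate state entropy (up to additive constants) via the distance
    of each state to its k-th nearest neighbor in the visited batch.
    `states` is a list of equal-length coordinate tuples."""
    n = len(states)
    d = len(states[0])
    total_log = 0.0
    for i, x in enumerate(states):
        # Euclidean distances to every other state, sorted ascending
        dists = sorted(math.dist(x, y) for j, y in enumerate(states) if j != i)
        total_log += math.log(dists[k - 1] + 1e-12)  # k-th nearest neighbor
    # Dimension-scaled average log-distance (constants dropped)
    return d * total_log / n
```

A batch of states spread widely over the space yields a larger estimate than a tightly clustered batch, so maximizing this quantity rewards efficient coverage.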
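The adaptive trade-off between safety and exploration that the abstract mentions can be sketched as a Lagrange-multiplier update: the weight on the safety cost grows while the constraint is violated and shrinks toward zero once it is satisfied. All names and the update rule below are illustrative assumptions, not the paper's exact method.

```python
def lagrange_tradeoff(entropy_obj, episode_cost, lam, cost_limit, lr=0.1):
    """One dual-ascent step on a Lagrange multiplier `lam`, followed by
    the penalized policy objective. `entropy_obj` is the exploration
    objective; `episode_cost` is the accumulated safety cost."""
    # Multiplier rises when the cost exceeds its limit, decays otherwise
    lam = max(0.0, lam + lr * (episode_cost - cost_limit))
    # Exploration objective penalized by the weighted safety cost
    objective = entropy_obj - lam * episode_cost
    return objective, lam
```

With this update, a policy that repeatedly violates the cost limit sees an ever-larger penalty, while a safe policy gradually recovers the pure exploration objective.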
| Original language | English |
|---|---|
| Title of host publication | The Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI-23) |
| Number of pages | 9 |
| Publication status | Published - 2023 |
| Event | 37th AAAI Conference on Artificial Intelligence - Washington, United States. Duration: 7 Feb 2023 → 14 Feb 2023. Conference number: 37 |
Conference
| Conference | 37th AAAI Conference on Artificial Intelligence |
|---|---|
| Abbreviated title | AAAI-23 |
| Country/Territory | United States |
| City | Washington |
| Period | 7/02/23 → 14/02/23 |
Bibliographical note
Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project, https://www.openaccess.nl/en/you-share-we-take-care. Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.
Keywords
- Reinforcement Learning
- Safe Exploration