Abstract
Learning effective policies for real-world problems is still an open challenge for the field of reinforcement learning (RL). The main limitation is the amount of data needed and the pace at which that data can be obtained. In this paper, we study how to build lightweight simulators of complicated systems that run fast enough for deep RL to be applicable. We focus on domains where agents interact with a reduced portion of a larger environment while still being affected by the global dynamics. Our method combines the use of local simulators with learned models that mimic the influence of the global system. The experiments reveal that incorporating this idea into the deep RL workflow can considerably accelerate the training process and opens up several opportunities for future work.
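The abstract only sketches the approach at a high level. The toy Python snippet below illustrates the general pattern it describes: a fast local simulator whose step is augmented by a learned model standing in for the global system's influence. All class names, dynamics, and the linear "learned" model are hypothetical placeholders for illustration, not the paper's implementation.

```python
# Hypothetical sketch: a cheap local simulator stepped together with a learned
# surrogate that supplies the effect the (unsimulated) global system would have.
import numpy as np


class LocalSimulator:
    """Toy local dynamics: the agent only sees and controls a small state vector."""

    def __init__(self, dim=4, seed=0):
        self.rng = np.random.default_rng(seed)
        self.state = self.rng.normal(size=dim)

    def step(self, action, external_influence):
        # Local dynamics plus the externally supplied global effect.
        self.state = 0.9 * self.state + action + external_influence
        reward = -float(np.linalg.norm(self.state))
        return self.state.copy(), reward


class LearnedInfluenceModel:
    """Stand-in for a model trained to mimic the global system's effect.

    Here it is just a fixed linear map; in practice it would be fit on traces
    collected from the full (slow) simulator.
    """

    def __init__(self, dim=4, seed=1):
        rng = np.random.default_rng(seed)
        self.W = 0.05 * rng.normal(size=(dim, dim))

    def predict(self, local_state):
        return self.W @ local_state


def rollout(policy, steps=50):
    """Collect one episode using the local simulator plus learned influence."""
    sim = LocalSimulator()
    influence = LearnedInfluenceModel()
    state = sim.state.copy()
    total_reward = 0.0
    for _ in range(steps):
        action = policy(state)
        state, reward = sim.step(action, influence.predict(state))
        total_reward += reward
    return total_reward


if __name__ == "__main__":
    random_policy = lambda s: np.clip(-0.1 * s, -1.0, 1.0)
    print("episode return:", rollout(random_policy))
```

In this setup the RL agent never has to wait on the slow global simulation during training rollouts; only the surrogate model needs occasional (re)fitting against data from the full system.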
Original language | English |
---|---|
Title of host publication | Proceedings of the 39th International Conference on Machine Learning |
Editors | K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, S. Sabato |
Publisher | PMLR |
Pages | 20604-20624 |
Number of pages | 21 |
Volume | 162 |
Publication status | Published - 2022 |
Event | The 39th International Conference on Machine Learning - Baltimore, United States |
Duration | 17 Jul 2022 → 23 Jul 2022 |
Conference number | 39th |
Publication series
Name | Proceedings of Machine Learning Research |
---|---|
Volume | 162 |
ISSN (Print) | 2640-3498 |
Conference
Conference | The 39th International Conference on Machine Learning |
---|---|
Abbreviated title | ICML 2022 |
Country/Territory | United States |
City | Baltimore |
Period | 17/07/22 → 23/07/22 |
Keywords
- reinforcement learning (RL)
- simulation and control