Abstract
Recent years have seen a growing interest in the use of deep neural networks as function approximators in reinforcement learning. In this paper, an experience replay method is proposed that ensures that the distribution of the experiences used for training lies between that of the policy and a uniform distribution. Through experiments on a magnetic manipulation task, it is shown that the method reduces the need for sustained exhaustive exploration during learning. This makes it attractive in scenarios where sustained exploration is infeasible or undesirable, such as for physical systems like robots and for lifelong learning. The method is also shown to improve the generalization performance of the trained policy, which can make it attractive for transfer learning. Finally, for small experience databases the method performs favorably when compared to the recently proposed alternative of using the temporal difference error to determine the experience sample distribution, which makes it an attractive option for robots with limited memory capacity.
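The core idea of the abstract, sampling training experiences from a distribution between the policy's own distribution and a uniform one, can be illustrated with a minimal sketch. This is not the paper's implementation: the class name, the two-buffer scheme (a FIFO buffer approximating the policy distribution plus a reservoir-sampled buffer approximating a uniform distribution over all past experiences), and the mixing parameter `beta` are all assumptions made for illustration.

```python
import random


class MixedReplayBuffer:
    """Illustrative replay buffer whose sampling distribution lies between
    the current policy's distribution and a uniform one.

    Hypothetical sketch: the two-buffer design and parameter names are
    assumptions, not the method from the paper.
    """

    def __init__(self, capacity, beta=0.5, seed=0):
        self.capacity = capacity   # capacity of each sub-buffer
        self.beta = beta           # fraction of a batch drawn from the FIFO buffer
        self.fifo = []             # most recent experiences ~ policy distribution
        self.reservoir = []        # reservoir sample ~ uniform over all history
        self.seen = 0              # total experiences observed so far
        self.rng = random.Random(seed)

    def add(self, experience):
        # FIFO buffer: keep only the most recent experiences, so its
        # contents roughly follow the current policy's distribution.
        self.fifo.append(experience)
        if len(self.fifo) > self.capacity:
            self.fifo.pop(0)
        # Reservoir sampling: every experience ever seen is retained with
        # equal probability, approximating a uniform distribution.
        self.seen += 1
        if len(self.reservoir) < self.capacity:
            self.reservoir.append(experience)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.reservoir[j] = experience

    def sample(self, batch_size):
        # Mix the two distributions: beta controls how close the resulting
        # training distribution is to the policy (1.0) or to uniform (0.0).
        n_fifo = round(self.beta * batch_size)
        batch = [self.rng.choice(self.fifo) for _ in range(n_fifo)]
        batch += [self.rng.choice(self.reservoir)
                  for _ in range(batch_size - n_fifo)]
        return batch
```

Setting `beta` near 1 trains mostly on recent, on-policy data; setting it near 0 trains mostly on a uniform sample of everything seen, which is one way to retain old experiences without sustained exploration.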
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) |
| Subtitle of host publication | IROS 2016 |
| Editors | Dong-Soo Kwon, Chul-Goo Kang, Il Hong Suh |
| Place of Publication | Piscataway, NJ, USA |
| Publisher | IEEE |
| Pages | 3947-3952 |
| ISBN (Print) | 978-1-5090-3762-9 |
| DOIs | |
| Publication status | Published - 2016 |
| Event | 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2016 - Daejeon, Korea, Republic of; 9 Oct 2016 → 14 Oct 2016; http://www.iros2016.org/ |
Conference
| Conference | 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2016 |
|---|---|
| Abbreviated title | IROS 2016 |
| Country/Territory | Korea, Republic of |
| City | Daejeon |
| Period | 9/10/16 → 14/10/16 |
| Internet address | http://www.iros2016.org/ |
Bibliographical note
Accepted Author Manuscript
Keywords
- Databases
- Neural networks
- Training
- Learning (artificial intelligence)
- Standards
- Robot control
Fingerprint
Dive into the research topics of 'Improved deep reinforcement learning for robotics through distribution-based experience retention'. Together they form a unique fingerprint.
Research output
- 36 Citations
- 1 Dissertation (TU Delft)

Sample efficient deep reinforcement learning for control
de Bruin, T., 2020, 167 p. Research output: Thesis › Dissertation (TU Delft)
Open Access; 1161 Downloads (Pure)