Abstract
Deep reinforcement learning (RL) has been successfully applied to a variety of game-like environments, but applying deep RL to visual navigation in realistic environments remains challenging. We propose a novel learning architecture capable of navigating an agent, e.g. a mobile robot, to a target specified by an image. To achieve this, we extend the batched A2C algorithm with three auxiliary tasks designed to improve visual navigation performance: predicting the segmentation of the observation image, predicting the segmentation of the target image, and predicting the depth map. These tasks enable supervised pre-training of a major part of the network and substantially reduce the number of training steps required. Training performance is further improved by gradually increasing the environment complexity over time. We propose an efficient neural network structure that can learn to navigate to multiple targets in multiple environments. Our method navigates in continuous state spaces and, on the AI2-THOR environment simulator, surpasses the performance of state-of-the-art goal-oriented visual navigation methods from the literature.
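The abstract describes combining the standard A2C objective with three auxiliary prediction losses. A minimal sketch of such a combined objective is shown below; the coefficient names and values (`value_coef`, `entropy_coef`, `aux_weight`) are illustrative assumptions, not values reported in the paper.

```python
def combined_loss(actor_loss, critic_loss, entropy,
                  seg_obs_loss, seg_target_loss, depth_loss,
                  value_coef=0.5, entropy_coef=0.01, aux_weight=0.1):
    """Hypothetical total objective: the standard A2C terms (policy loss,
    weighted value loss, entropy bonus) plus three auxiliary terms for
    observation segmentation, target segmentation, and depth prediction.
    All weights are assumed for illustration only."""
    a2c = actor_loss + value_coef * critic_loss - entropy_coef * entropy
    aux = aux_weight * (seg_obs_loss + seg_target_loss + depth_loss)
    return a2c + aux
```

Because the auxiliary heads are supervised (segmentation and depth labels come from the simulator), they can also be trained on their own to pre-train the shared encoder before RL begins, which is how the abstract's pre-training step would fit in.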
Original language | English |
---|---|
Title of host publication | Proceedings of the European Conference on Mobile Robots (ECMR 2019) |
Editors | Libor Preucil, Sven Behnke, Miroslav Kulich |
Place of Publication | Piscataway, NJ, USA |
Publisher | IEEE |
Number of pages | 8 |
ISBN (Electronic) | 978-1-7281-3605-9 |
Publication status | Published - 2019 |
Event | ECMR 2019: European Conference on Mobile Robots - Prague, Czech Republic Duration: 4 Sept 2019 → 6 Sept 2019 |
Conference
Conference | ECMR 2019: European Conference on Mobile Robots |
---|---|
Country/Territory | Czech Republic |
City | Prague |
Period | 4/09/19 → 6/09/19 |
Bibliographical note
Accepted Author Manuscript

Keywords
- Actor-critic
- Auxiliary tasks
- Deep reinforcement learning
- Robot navigation