Abstract
Reinforcement learning (RL) is a suitable approach for controlling systems with unknown or time-varying dynamics. In principle, RL does not require a model of the system, but before it learns an acceptable policy, it needs many unsuccessful trials, which real robots usually cannot withstand. It is well known that RL can be sped up and made safer by using models learned online. In this paper, we propose to use symbolic regression to construct compact, parsimonious models described by analytic equations, which are suitable for real-time robot control. Single node genetic programming (SNGP) is employed as a tool to automatically search for equations fitting the available data. We demonstrate the approach on two benchmark examples: a simulated mobile robot and the pendulum swing-up problem; the latter both in simulations and real-time experiments. The results show that through this approach we can find accurate models even for small batches of training data. Based on the symbolic model found, RL can control the system well.
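The paper uses SNGP to search over equation structures automatically; the full algorithm is not reproduced here. As a rough, hypothetical stand-in for the idea of fitting a compact analytic model to a small batch of data, the sketch below recovers pendulum acceleration coefficients by least squares over a hand-picked library of candidate terms (symbolic regression would discover such terms rather than fix them in advance). All constants and variable names are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative sketch only (not the paper's SNGP implementation):
# recover an analytic pendulum model from a small data batch by
# least squares over a fixed library of candidate analytic terms.
rng = np.random.default_rng(0)

# Assumed "true" dynamics: th_dd = a*sin(th) + b*th_d + c*u
a_true, b_true, c_true = -9.81, -0.05, 2.0

# Small batch of training samples (states and inputs)
th = rng.uniform(-np.pi, np.pi, 50)
th_d = rng.uniform(-8.0, 8.0, 50)
u = rng.uniform(-2.0, 2.0, 50)
th_dd = a_true * np.sin(th) + b_true * th_d + c_true * u

# Candidate term library; a symbolic-regression search would
# propose and evolve such terms automatically.
Phi = np.column_stack([np.sin(th), th_d, u])
coef, *_ = np.linalg.lstsq(Phi, th_dd, rcond=None)
print(coef)
```

On noise-free data the fitted coefficients match the assumed ones, giving a compact analytic model of the kind suitable for real-time control.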
Original language | English |
---|---|
Title of host publication | Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2018) |
Editors | Kevin Lynch |
Place of Publication | Piscataway, NJ, USA |
Publisher | IEEE |
Pages | 5105-5112 |
ISBN (Electronic) | 978-1-5386-3081-5 |
DOIs | |
Publication status | Published - 2018 |
Event | ICRA 2018: 2018 IEEE International Conference on Robotics and Automation, Brisbane Convention & Exhibition Centre, Brisbane, Australia. Duration: 21 May 2018 → 25 May 2018 |
Conference
Conference | ICRA 2018: 2018 IEEE International Conference on Robotics and Automation |
---|---|
Country/Territory | Australia |
City | Brisbane |
Period | 21/05/18 → 25/05/18 |
Keywords
- model learning for control
- AI-based methods
- symbolic regression
- reinforcement learning
- optimal control