Some imitation learning approaches rely on Inverse Reinforcement Learning (IRL) methods to decode and generalize the implicit goals conveyed by expert demonstrations. Work on IRL typically assumes that expert demonstrations are available, which is not always the case. There are Machine Learning methods that allow non-expert teachers to guide robots toward learning complex policies, which can remove IRL's dependence on experts. This work introduces an approach for simultaneously teaching robot policies and objective functions from vague human corrective feedback. The main goal is to generalize the insights that a non-expert human teacher provides to the robot to unseen conditions, without requiring further human effort in the complementary training process. We present an experimental validation of the introduced approach for transferring the learned knowledge to scenarios not considered while the non-expert was teaching. Experimental results show that the learned reward functions achieve performance in RL processes similar to that of engineered reward functions used as a baseline, in both simulated and real environments.
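The core idea of learning a reward model from corrective feedback can be illustrated with a minimal sketch. This is not the paper's method: it assumes a toy one-dimensional state, three discrete actions, hand-picked linear features, and a simulated teacher rule, and it uses a simple perceptron-style update that pushes the reward of the teacher's corrected action above that of the action the robot chose.

```python
import numpy as np

N_ACTIONS = 3

def features(state, action):
    # Hand-crafted features; purely illustrative.
    return np.array([state, float(action), state * action, 1.0])

def greedy_action(w, state):
    # Greedy policy with respect to the current linear reward model.
    return int(np.argmax([w @ features(state, a) for a in range(N_ACTIONS)]))

def teacher_correction(state, action):
    # Stand-in for the non-expert teacher: corrects toward action 2
    # when state > 0 and toward action 0 otherwise (an assumed toy rule).
    target = 2 if state > 0 else 0
    return None if action == target else target

# Learn reward weights w so that r(s, a) = w . phi(s, a) ranks the
# corrected action above the one the robot picked (perceptron-style).
states = np.linspace(-1.0, 1.0, 9)
states = states[states != 0.0]   # drop the ambiguous boundary state
w = np.zeros(4)
for _ in range(200):             # epochs; the toy data are separable
    mistakes = 0
    for s in states:
        a = greedy_action(w, s)
        c = teacher_correction(s, a)
        if c is not None:
            w += features(s, c) - features(s, a)
            mistakes += 1
    if mistakes == 0:            # a full pass with no corrections
        break
```

After convergence the learned reward induces the teacher's intended policy on the training states, and, because the model is a function of state features rather than a lookup table, it also ranks actions for states never corrected during teaching.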
|Title of host publication||Proceedings of the 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM 2019|
|Place of Publication||Piscataway, NJ, USA|
|Publication status||Published - 2019|
|Event||2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM 2019 - Hong Kong, China|
Duration: 8 Jul 2019 → 12 Jul 2019
|Conference||2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM 2019|
|Period||8/07/19 → 12/07/19|