An Incremental Inverse Reinforcement Learning Approach for Motion Planning with Separated Path and Velocity Preferences

S. Avaei, L.F. van der Spaa*, L. Peternel, J. Kober

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Humans often demonstrate diverse behaviors due to their personal preferences, for instance, related to their individual execution style or personal margin for safety. In this paper, we consider the problem of integrating both path and velocity preferences into trajectory planning for robotic manipulators. We first learn reward functions that represent the user's path and velocity preferences from kinesthetic demonstrations. We then optimize the trajectory in two steps, first the path and then the velocity, to produce trajectories that adhere to both task requirements and user preferences. We design a set of parameterized features that capture the fundamental preferences in a pick-and-place type of object transportation task, both in the shape and timing of the motion. We demonstrate that our method is capable of generalizing such preferences to new scenarios. We implement our algorithm on a Franka Emika 7-DoF robot arm and validate the functionality and flexibility of our approach in a user study. The results show that non-expert users are able to teach the robot their preferences with just a few iterations of feedback.
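
As a rough illustration of the separated path/velocity idea described in the abstract, the sketch below shows a linear reward over distinct path and timing features, an incremental (coactive-style) weight update from a user correction, and a two-step selection of path first and velocity profile second. This is not the authors' implementation; all function names, feature choices, and parameters here are hypothetical stand-ins chosen only to make the structure concrete.

```python
import numpy as np

# Hypothetical sketch: linear rewards over separate path and velocity
# features, updated incrementally from user corrections, then used in a
# two-step optimization (path first, then timing).

def path_features(waypoints):
    # Example path features: mean height above the table and total path length.
    heights = waypoints[:, 2]
    lengths = np.linalg.norm(np.diff(waypoints, axis=0), axis=1)
    return np.array([heights.mean(), lengths.sum()])

def velocity_features(durations):
    # Example timing features: total duration and a peak-speed proxy.
    return np.array([durations.sum(), 1.0 / durations.min()])

def update_weights(w, feats_user, feats_robot, lr=0.1):
    # Incremental update: shift weights toward the features of the user's
    # corrected trajectory and away from the robot's current one.
    return w + lr * (feats_user - feats_robot)

def optimize_path(w_path, candidate_paths):
    # Step 1: pick the candidate path whose features score highest.
    scores = [w_path @ path_features(p) for p in candidate_paths]
    return candidate_paths[int(np.argmax(scores))]

def optimize_velocity(w_vel, candidate_timings):
    # Step 2: given the chosen path, pick the best-scoring timing profile.
    scores = [w_vel @ velocity_features(t) for t in candidate_timings]
    return candidate_timings[int(np.argmax(scores))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_path, w_vel = np.zeros(2), np.zeros(2)

    # Toy candidate sets standing in for a real trajectory optimizer.
    candidate_paths = [rng.uniform(0, 1, size=(5, 3)) for _ in range(10)]
    candidate_timings = [rng.uniform(0.2, 1.0, size=4) for _ in range(10)]

    # Pretend the user's kinesthetic correction prefers a higher path.
    robot_path = optimize_path(w_path, candidate_paths)
    user_path = robot_path.copy()
    user_path[:, 2] += 0.3
    w_path = update_weights(w_path, path_features(user_path),
                            path_features(robot_path))
    print("updated path weights:", w_path)
```

In this toy version the candidate sets are random; in practice the path and timing would come from a trajectory optimizer, but the separation of the two reward functions and the iterative weight update mirror the structure the abstract describes.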
Original language: English
Article number: 61
Number of pages: 22
Journal: Robotics
Volume: 12
Issue number: 2
DOIs:
Publication status: Published - 2023

Keywords

  • learning from demonstration
  • human preferences
  • incremental inverse reinforcement learning
  • coactive learning
  • physical human–robot interaction

