Aligning Human Preferences with Baseline Objectives in Reinforcement Learning

Daniel Marta, Simon Holk, Christian Pek, Jana Tumova, Iolanda Leite

Research output: Contribution to conference › Paper › peer-review

Abstract

Practical implementations of deep reinforcement learning (deep RL) have been challenging due to a multitude of factors, such as designing reward functions that cover every possible interaction. To reduce the heavy burden of robot reward engineering, we aim to leverage subjective human preferences gathered in the context of human-robot interaction, while taking advantage of a baseline reward function when one is available. By assuming that baseline objectives are designed beforehand, we are able to narrow down the policy space and request human attention only when their input matters most. To allow control over the optimization of the different objectives, our approach adopts a multi-objective setting. We achieve human-compliant policies by sequentially training an optimal policy from a baseline specification and collecting queries on pairs of trajectories. These policies are obtained by training a reward estimator to generate Pareto-optimal policies that include human-preferred behaviours. Our approach is sample efficient, and we conducted a user study to collect real human preferences, which we used to obtain a policy in a social navigation environment.
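As an illustrative aid only, and not the authors' implementation, the sketch below shows one common way a preference-based reward estimator of this kind can be set up: a Bradley-Terry style model trained on queried pairs of trajectories, whose learned reward is then scalarized with a fixed baseline objective, yielding one point on a Pareto front per weight choice. All names (RewardNet, preference_loss, combined_reward, the weight w) are hypothetical, and PyTorch is assumed.

```python
# Minimal sketch, assuming PyTorch and a Bradley-Terry preference model.
# Names and shapes are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardNet(nn.Module):
    """Learned reward estimator r(s, a) trained from human preferences."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def preference_loss(reward_net, traj_a, traj_b, pref):
    """Bradley-Terry loss over a queried trajectory pair.
    Each trajectory is an (obs, act) pair of tensors of shape (T, dim);
    pref is 1.0 if the human preferred trajectory A, else 0.0."""
    ret_a = reward_net(*traj_a).sum()  # predicted return of trajectory A
    ret_b = reward_net(*traj_b).sum()  # predicted return of trajectory B
    logits = ret_a - ret_b
    return F.binary_cross_entropy_with_logits(logits, torch.tensor(pref))

def combined_reward(reward_net, baseline_reward, obs, act, w=0.5):
    """Scalarized multi-objective reward: weighted sum of the fixed baseline
    objective and the learned preference reward (one Pareto trade-off per w)."""
    return w * baseline_reward(obs, act) + (1.0 - w) * reward_net(obs, act)
```

In this sketch, the baseline objective narrows the policy space during initial training, and the learned preference reward is only queried and fitted afterwards, in line with the sequential scheme described in the abstract.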
Original language: English
Publication status: Published - 2023
Externally published: Yes
Event: ICRA 2023: International Conference on Robotics and Automation - London, United Kingdom
Duration: 29 May 2023 - 2 Jun 2023

Conference

Conference: ICRA 2023: International Conference on Robotics and Automation
Country/Territory: United Kingdom
City: London
Period: 29/05/23 - 2/06/23
