Inferring reward functions from demonstrations and pairwise preferences is a promising approach for aligning Reinforcement Learning (RL) agents with human intentions. However, state-of-the-art methods typically focus on learning a single reward model, making it difficult to trade off different reward functions obtained from multiple experts. We propose Multi-Objective Reinforced Active Learning (MORAL), a novel method for combining diverse demonstrations of social norms into a Pareto-optimal policy. By maintaining a distribution over scalarization weights, our approach is able to interactively tune a deep RL agent towards a variety of preferences, while eliminating the need for computing multiple policies. We empirically demonstrate the effectiveness of MORAL in two scenarios, which model a delivery task and an emergency task that require an agent to act in the presence of normative conflicts. Overall, we consider our research a step towards multi-objective RL with learned rewards, bridging the gap between the current reward learning and machine ethics literature.
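The core idea in the abstract, maintaining a distribution over scalarization weights that combine multiple learned reward functions into a single scalar reward, can be illustrated with a minimal sketch. The Dirichlet weight distribution, the three-objective reward vector, and all variable names here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def scalarize(reward_vector, weights):
    """Linear scalarization: collapse per-objective rewards into one scalar."""
    return float(np.dot(weights, reward_vector))

# Hypothetical belief over scalarization weights, represented by samples on
# the probability simplex (a Dirichlet is used here purely for illustration;
# in active learning this belief would be updated from pairwise preferences).
weight_samples = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=1000)

# The mean of the samples summarizes the current estimate of the trade-off.
mean_weights = weight_samples.mean(axis=0)

# An example reward vector, e.g. outputs of three learned reward models
# evaluated at one state-action pair (values are made up).
reward_vector = np.array([0.5, -0.2, 1.0])

# The scalarized reward is what a single-objective RL agent would optimize.
scalar_reward = scalarize(reward_vector, mean_weights)
```

Because the weights live on the simplex, the scalarized reward is a convex combination of the individual objectives, which is what allows a single policy to be tuned across preferences rather than training one policy per reward function.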
|Title of host publication||Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems (AAMAS '22)|
|Editors||Catherine Pelachaud, Matthew E. Taylor|
|Publisher||International Foundation for Autonomous Agents and Multiagent Systems|
|Publication status||Published - 2022|
|Event||AAMAS 2022: 21st International Conference on Autonomous Agents and Multiagent Systems (Virtual), New Zealand|
Duration: 9 May 2022 → 13 May 2022
|Conference||AAMAS 2022: 21st International Conference on Autonomous Agents and Multiagent Systems (Virtual)|
|Period||9/05/22 → 13/05/22|
|Bibliographical note||Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care|
Unless otherwise indicated in the copyright section, the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.
- Active Learning
- Inverse Reinforcement Learning
- Multi-Objective Decision-Making
- Value Alignment