MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

Abstract

Inferring reward functions from demonstrations and pairwise preferences are promising approaches for aligning Reinforcement Learning (RL) agents with human intentions. However, state-of-the-art methods typically focus on learning a single reward model, making it difficult to trade off different reward functions from multiple experts. We propose Multi-Objective Reinforced Active Learning (MORAL), a novel method for combining diverse demonstrations of social norms into a Pareto-optimal policy. By maintaining a distribution over scalarization weights, our approach can interactively tune a deep RL agent towards a variety of preferences, while eliminating the need to compute multiple policies. We empirically demonstrate the effectiveness of MORAL in two scenarios, which model a delivery task and an emergency task that require an agent to act in the presence of normative conflicts. Overall, we consider our research a step towards multi-objective RL with learned rewards, bridging the gap between current reward learning and machine ethics literature.
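The sketch below illustrates the scalarization idea mentioned in the abstract: several learned reward functions are combined with a weight vector, and a distribution over those weights is updated from a pairwise preference. It is a minimal illustration only, not the authors' implementation; the function names, the Bradley-Terry-style preference model, and the particle approximation of the weight distribution are assumptions made for this example.

```python
import numpy as np

# Minimal sketch (not the authors' code): two learned per-objective rewards are
# combined via scalarization weights w, and a particle approximation of the
# distribution over w is reweighted after one simulated pairwise preference.

def scalarized_reward(reward_vector, w):
    """Combine a vector of per-objective rewards with scalarization weights w."""
    return np.dot(w, reward_vector)

def preference_likelihood(returns_a, returns_b, w):
    """Bradley-Terry style likelihood that trajectory A is preferred to B under w
    (an assumed stand-in for the paper's preference model)."""
    ra, rb = np.dot(w, returns_a), np.dot(w, returns_b)
    return 1.0 / (1.0 + np.exp(rb - ra))

rng = np.random.default_rng(0)
particles = rng.dirichlet(alpha=[1.0, 1.0], size=1000)  # candidate weights on the 2-simplex
returns_a = np.array([5.0, 1.0])  # per-objective returns of trajectory A
returns_b = np.array([2.0, 4.0])  # per-objective returns of trajectory B

# Suppose the expert prefers A: reweight the particles by the preference likelihood.
likelihoods = np.array([preference_likelihood(returns_a, returns_b, w) for w in particles])
posterior = likelihoods / likelihoods.sum()
mean_w = posterior @ particles  # posterior mean scalarization weights
print("posterior mean weights:", mean_w)
```

Under this toy update, the weight distribution shifts towards the first objective, and the deep RL agent would then be trained on the reward scalarized with the refined weights rather than on separate policies per objective.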
Original language: English
Title of host publication: Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems (AAMAS '22)
Editors: Catherine Pelachaud, Matthew E. Taylor
Publisher: International Foundation for Autonomous Agents and Multiagent Systems
Pages: 1038-1046
ISBN (Print): 978-1-4503-9213-6
Publication status: Published - 2022
Event: AAMAS 2022: 21st International Conference on Autonomous Agents and Multiagent Systems (Virtual), New Zealand
Duration: 9 May 2022 - 13 May 2022

Conference

Conference: AAMAS 2022: 21st International Conference on Autonomous Agents and Multiagent Systems (Virtual)
Country/Territory: New Zealand
Period: 9/05/22 - 13/05/22

Bibliographical note

Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care
Otherwise, as indicated in the copyright section, the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.

Keywords

  • Active Learning
  • Inverse Reinforcement Learning
  • Multi-Objective Decision-Making
  • Value Alignment
