Multitask Soft Option Learning

Maximilian Igl, Andrew Gambardella, Jinke He, Nantas Nardelli, N Siddharth, Wendelin Böhmer, Shimon Whiteson

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

We present Multitask Soft Option Learning (MSOL), a hierarchical multitask framework based on Planning as Inference. MSOL extends the concept of options, using separate variational posteriors for each task, regularized by a shared prior. This “soft” version of options avoids several instabilities during training in a multitask setting, and provides a natural way to learn both intra-option policies and their terminations. Furthermore, it allows fine-tuning of options for new tasks without forgetting their learned policies, leading to faster training without reducing the expressiveness of the hierarchical policy. We demonstrate empirically that MSOL significantly outperforms both hierarchical and flat transfer-learning baselines.
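
As a rough illustration of the idea (a schematic sketch inferred from the abstract, not necessarily the paper's exact objective), the per-task training criterion can be read as maximizing return while penalizing the KL divergence between each task's variational posterior option components and the shared prior:

\[
\max_{\pi_i,\,\beta_i} \; \mathbb{E}\Big[\sum_t r_t
\;-\; \alpha\, D_{\mathrm{KL}}\big(\pi_i(a_t \mid s_t, z_t)\,\big\|\,\pi_0(a_t \mid s_t, z_t)\big)
\;-\; \lambda\, D_{\mathrm{KL}}\big(\beta_i(b_t \mid s_t, z_{t-1})\,\big\|\,\beta_0(b_t \mid s_t, z_{t-1})\big)\Big],
\]

where \(i\) indexes tasks, \(z_t\) is the active option, \(b_t\) its termination decision, \(\pi_0\) and \(\beta_0\) are the shared prior intra-option and termination policies, and \(\alpha, \lambda\) are assumed regularization weights.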
Original language: English
Title of host publication: Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)
Pages: 969-978
Number of pages: 10
Volume: 124
Publication status: Published - 2020
Event: 36th Conference on Uncertainty in Artificial Intelligence - Virtual/online event
Duration: 4 Aug 2020 - 6 Aug 2020
Conference number: 36

Publication series

Name: Proceedings of Machine Learning Research

Conference

Conference: 36th Conference on Uncertainty in Artificial Intelligence
Abbreviated title: UAI 2020
Period: 4/08/20 - 6/08/20
