Abstract
In this paper we study how to learn stochastic, multimodal transition dynamics in reinforcement learning (RL) tasks. We focus on evaluating transition function estimation, deferring planning over the learned model to future work. Stochasticity is a fundamental property of many task environments, but discriminative function approximators have difficulty estimating multimodal stochasticity. In contrast, deep generative models can capture complex, high-dimensional outcome distributions. First, we discuss why, among such models, conditional variational inference (VI) is theoretically the most appealing for model-based RL. Subsequently, we compare different VI models on their ability to learn complex stochasticity on simulated functions, as well as on a typical RL gridworld with multimodal dynamics. Results show that VI successfully predicts multimodal outcomes, while robustly ignoring spurious modes in the deterministic parts of the transition dynamics. In summary, we present a robust method for learning multimodal transitions with function approximation, a key prerequisite for model-based RL in stochastic domains.
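The core difficulty the abstract points to can be illustrated with a minimal sketch (not from the paper; all names here are hypothetical). Under multimodal dynamics, a discriminative regressor trained with squared error converges to the conditional mean of the next state, which may be an outcome that never actually occurs, whereas a latent-variable model in the spirit of conditional VI samples a latent code and decodes it to one concrete mode:

```python
import random

def true_transition(s):
    # Hypothetical multimodal dynamics: from state s the agent
    # is pushed one step left or right with equal probability.
    return s + 1 if random.random() < 0.5 else s - 1

def mse_point_estimate(samples):
    # A discriminative regressor fit with squared error converges
    # to the conditional mean of the observed outcomes.
    return sum(samples) / len(samples)

def generative_sample(s):
    # A latent-variable (VI-style) model instead draws a latent
    # code z and decodes it to a single concrete mode.
    z = random.random()  # stand-in for sampling the latent variable
    return s + 1 if z < 0.5 else s - 1

random.seed(0)
s = 0
outcomes = [true_transition(s) for _ in range(10000)]

mean_pred = mse_point_estimate(outcomes)  # near 0: between the modes, never observed
gen_pred = generative_sample(s)           # either -1 or +1: a valid mode
```

The point estimate lands between the two modes (an invalid next state), while the generative sample always lies on one of the true modes; this is the failure mode of discriminative approximators the abstract refers to.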
Original language | English |
---|---|
Title of host publication | SURL 2017: 1st Scaling-Up Reinforcement Learning (SURL) Workshop |
Pages | 1-18 |
Number of pages | 18 |
Publication status | Published - 2017 |
Event | SURL 2017: 1st Scaling-Up Reinforcement Learning (SURL) Workshop, Skopje, Macedonia, The Former Yugoslav Republic of; Duration: 18 Sept 2017 → 18 Sept 2017 |
Workshop
Workshop | SURL 2017: 1st Scaling-Up Reinforcement Learning (SURL) Workshop |
---|---|
Country/Territory | Macedonia, The Former Yugoslav Republic of |
City | Skopje |
Period | 18/09/17 → 18/09/17 |