Bayesian Reinforcement Learning in Factored POMDPs

Sammie Katt, Frans A. Oliehoek, Christopher Amato

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

Model-based Bayesian Reinforcement Learning (BRL) provides a principled solution to the exploration-exploitation trade-off, but such methods typically assume a fully observable environment. The few Bayesian RL methods that are applicable in partially observable domains, such as the Bayes-Adaptive POMDP (BA-POMDP), scale poorly. To address this issue, we introduce the Factored BA-POMDP model (FBA-POMDP), a framework that is able to learn a compact model of the dynamics by exploiting the underlying structure of a POMDP. The FBA-POMDP framework casts the problem as a planning task, for which we adapt the Monte-Carlo Tree Search planning algorithm and develop a belief tracking method to approximate the joint posterior over the state and model variables. Our empirical results show that this method outperforms a number of BRL baselines and is able to learn efficiently when the factorization is known, as well as learn both the factorization and the model parameters simultaneously.
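To illustrate the kind of belief tracking the abstract refers to, the sketch below maintains a particle-based posterior over both the hidden state and the unknown dynamics, in the spirit of (flat) BA-POMDP belief updates rather than the paper's factored implementation. It is a minimal, hypothetical example: all names (Particle, belief_update), the toy dimensions, and the Dirichlet-count representation are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch only: a particle filter over joint (state, model)
# hypotheses, BA-POMDP style. Each particle carries its own Dirichlet
# counts for the transition and observation dynamics.
import copy
import random
from dataclasses import dataclass, field

N_STATES, N_ACTIONS, N_OBS = 4, 2, 3  # toy problem sizes (assumed)


@dataclass
class Particle:
    state: int
    # Dirichlet counts over next states and observations, indexed by (s, a).
    trans_counts: list = field(default_factory=lambda: [
        [[1.0] * N_STATES for _ in range(N_ACTIONS)] for _ in range(N_STATES)])
    obs_counts: list = field(default_factory=lambda: [
        [[1.0] * N_OBS for _ in range(N_ACTIONS)] for _ in range(N_STATES)])


def sample_categorical(counts):
    """Draw an index proportionally to the (unnormalized) counts."""
    total = sum(counts)
    r, acc = random.random() * total, 0.0
    for i, c in enumerate(counts):
        acc += c
        if r <= acc:
            return i
    return len(counts) - 1


def belief_update(particles, action, observation):
    """One importance-sampling step: simulate each particle forward under its
    own model, weight it by the likelihood of the real observation, update the
    model counts, then resample to obtain the new (unweighted) belief."""
    weighted = []
    for p in particles:
        s2 = sample_categorical(p.trans_counts[p.state][action])
        oc = p.obs_counts[s2][action]
        w = oc[observation] / sum(oc)  # observation likelihood under this model
        # Bayesian model update: count the simulated transition and the
        # actually observed symbol.
        p.trans_counts[p.state][action][s2] += 1.0
        p.obs_counts[s2][action][observation] += 1.0
        p.state = s2
        weighted.append((w, p))
    total = sum(w for w, _ in weighted) or 1e-12
    # Deep-copy on resampling so duplicated particles do not share counts.
    return [copy.deepcopy(p) for p in random.choices(
        [p for _, p in weighted],
        weights=[w / total for w, _ in weighted],
        k=len(particles))]


if __name__ == "__main__":
    belief = [Particle(state=0) for _ in range(100)]
    belief = belief_update(belief, action=1, observation=2)
    print(len(belief), "particles after update")
```

The FBA-POMDP additionally factors the dynamics over state features (a Bayes network per action) and reinvigorates particles over candidate structures, which this flat sketch deliberately omits.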
Original language: English
Title of host publication: AAMAS'19
Subtitle of host publication: Proceedings of the Eighteenth International Conference on Autonomous Agents and Multiagent Systems (AAMAS)
Publisher: International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Pages: 7-15
Number of pages: 9
ISBN (Print): 978-1-4503-6309-9
Publication status: Published - 2019
Event: AAMAS 2019: The 18th International Conference on Autonomous Agents and MultiAgent Systems - Montreal, Canada
Duration: 13 May 2019 - 17 May 2019

Conference

Conference: AAMAS 2019
Country: Canada
City: Montreal
Period: 13/05/19 - 17/05/19

Keywords

  • Bayesian reinforcement learning
  • POMDPs
  • Markov-Chain Monte Carlo
  • Monte-Carlo Tree Search
  • Bayes Networks


Cite this

    Katt, S., Oliehoek, F. A., & Amato, C. (2019). Bayesian Reinforcement Learning in Factored POMDPs. In AAMAS'19: Proceedings of the Eighteenth International Conference on Autonomous Agents and Multiagent Systems (AAMAS) (pp. 7-15). International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS). https://dl.acm.org/citation.cfm?id=3331668