A Unifying Framework for Reinforcement Learning and Planning

Thomas M. Moerland*, Joost Broekens, Aske Plaat, Catholijn M. Jonker

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Sequential decision making, commonly formalized as optimization of a Markov Decision Process (MDP), is a key challenge in artificial intelligence. Two successful approaches to MDP optimization are reinforcement learning (RL) and planning, each of which largely has its own research community. However, since both fields solve the same problem, we should be able to disentangle the common factors in their solution approaches. Therefore, this paper presents a unifying algorithmic framework for reinforcement learning and planning (FRAP), which identifies the underlying dimensions on which MDP planning and learning algorithms have to decide. At the end of the paper, we compare a variety of well-known planning, model-free RL, and model-based RL algorithms along these dimensions. Altogether, the framework may help provide deeper insight into the algorithmic design space of planning and reinforcement learning.
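As a concrete illustration of the shared objective the abstract describes, the sketch below solves one small MDP twice: once by planning (value iteration, which sweeps full Bellman backups over a known model) and once by model-free RL (Q-learning, which estimates the same optimal values from sampled transitions). The toy MDP itself (the transition tensor P, the rewards R) and all hyperparameters are illustrative assumptions, not taken from the paper, nor is this the FRAP framework itself; it only shows the common problem both algorithm families optimize.

    import numpy as np

    # Hypothetical toy MDP: 2 states, 2 actions, known dynamics P and rewards R.
    # All numbers below are illustrative, not from the paper.
    n_states, n_actions, gamma = 2, 2, 0.9
    P = np.array([  # P[s, a, s'] = transition probability
        [[0.8, 0.2], [0.1, 0.9]],
        [[0.5, 0.5], [0.0, 1.0]],
    ])
    R = np.array([  # R[s, a] = expected immediate reward
        [1.0, 0.0],
        [0.0, 2.0],
    ])

    def value_iteration(tol=1e-8):
        """Planning: full Bellman optimality backups using the known model."""
        V = np.zeros(n_states)
        while True:
            # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] * V[s']
            Q = R + gamma * P @ V
            V_new = Q.max(axis=1)
            if np.abs(V_new - V).max() < tol:
                return V_new
            V = V_new

    def q_learning(steps=20000, alpha=0.1, eps=0.1, seed=0):
        """Model-free RL: estimate the same values from sampled transitions."""
        rng = np.random.default_rng(seed)
        Q = np.zeros((n_states, n_actions))
        s = 0
        for _ in range(steps):
            # Epsilon-greedy exploration over the current value estimates.
            a = rng.integers(n_actions) if rng.random() < eps else Q[s].argmax()
            s2 = rng.choice(n_states, p=P[s, a])  # sample next state from the MDP
            # Sampled backup toward the same Bellman target as value iteration.
            Q[s, a] += alpha * (R[s, a] + gamma * Q[s2].max() - Q[s, a])
            s = s2
        return Q.max(axis=1)

    print(value_iteration())  # exact optimal values via full model-based backups
    print(q_learning())       # sampled estimates converging toward the same values

Both procedures target the same Bellman optimality equation; they differ along dimensions such as whether backups use the full model or sampled transitions, which is exactly the kind of design choice the framework makes explicit.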

Original language: English
Article number: 908353
Number of pages: 25
Journal: Frontiers in Artificial Intelligence
Volume: 5
DOI: 10.3389/frai.2022.908353
Publication status: Published - 2022

Keywords

  • framework
  • model-based reinforcement learning
  • overview
  • planning
  • reinforcement learning
  • synthesis
