A Framework for Reinforcement Learning and Planning: Extended Abstract

Research output: Chapter in Book/Conference proceedings/Edited volume › Chapter › Scientific › Peer-reviewed

Abstract

Sequential decision making, commonly formalized as Markov Decision Process (MDP) optimization, is a key challenge in artificial intelligence. Two successful approaches to MDP optimization are planning and reinforcement learning (RL). Both research fields largely have their own research communities. However, if both fields solve the same problem, then we should be able to disentangle the common factors in their solution approaches. Therefore, this paper presents a unifying framework for reinforcement learning and planning (FRAP), which identifies the underlying dimensions on which any planning or learning algorithm has to decide. At the end of the paper, we compare, in a single table, a variety of well-known planning, model-free and model-based RL algorithms along the dimensions of our framework, illustrating the validity of the framework. Altogether, FRAP provides deeper insight into the algorithmic space of planning and reinforcement learning, and also suggests new approaches to the integration of both fields.
Original language: English
Title of host publication: ICAPS: PRL 2020
Subtitle of host publication: Proceedings of the 1st Workshop on Bridging the Gap Between AI Planning and Reinforcement Learning (PRL)
Editors: Alan Fern, Vicenc Gomez, Anders Jonsson, Michael Katz, Hector Palacios, Scott Sanner
Publisher: Association for the Advancement of Artificial Intelligence (AAAI)
Pages: 50-52
Number of pages: 3
Publication status: Published - 2020
Event: ICAPS 2020: 30th International Conference on Automated Planning and Scheduling - Virtual Nancy, France
Duration: 19 Oct 2020 – 30 Oct 2020
Conference number: 30th

Conference

Conference: ICAPS 2020
Country: France
City: Virtual Nancy
Period: 19/10/20 – 30/10/20

