Genetic programming methods for reinforcement learning

Research output: Contribution to conference › Abstract › Scientific

Abstract

Reinforcement learning (RL) algorithms can be used to optimally solve dynamic decision-making and control problems. With continuous-valued state and input variables, RL algorithms must rely on function approximators to represent the value function and policy mappings. Commonly used numerical approximators, such as neural networks or basis-function expansions, have two main drawbacks: they are black-box models that offer no insight into the learned mappings, and they require significant trial-and-error tuning of their meta-parameters. In addition, results obtained with deep neural networks often suffer from a lack of reproducibility. In this talk, we discuss a family of new approaches to constructing smooth approximators for RL by means of genetic programming, and more specifically symbolic regression. We show how to construct process models and value functions represented by parsimonious analytic expressions, using state-of-the-art algorithms such as Single Node Genetic Programming and Multi-Gene Genetic Programming. We include examples of nonlinear control problems that can be successfully solved by reinforcement learning with symbolic regression, and illustrate some of the challenges this exciting field of research currently faces.
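To make the core idea concrete, the following is a minimal, self-contained sketch of symbolic regression for a value function. It is not the Single Node or Multi-Gene GP algorithm mentioned in the abstract: it replaces evolutionary search with exhaustive enumeration of tiny expression trees, and the target V(x) = x², the primitive set, and all names are illustrative assumptions.

```python
import itertools

# Toy target: value function V(x) = x^2 sampled on a grid
# (an illustrative stand-in for a value function learned by RL).
xs = [i / 10.0 for i in range(-20, 21)]
target = [x * x for x in xs]

# Expression trees as nested tuples, e.g. ('mul', 'x', 'x');
# terminals are the input variable 'x' or a float constant.
OPS = {
    'add': lambda a, b: a + b,
    'sub': lambda a, b: a - b,
    'mul': lambda a, b: a * b,
}
TERMINALS = ['x', 1.0]

def evaluate(tree, x):
    """Recursively evaluate an expression tree at input x."""
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def trees(depth):
    """Enumerate all expression trees up to the given depth."""
    yield from TERMINALS
    if depth == 0:
        return
    subtrees = list(trees(depth - 1))
    for op in OPS:
        for left, right in itertools.product(subtrees, repeat=2):
            yield (op, left, right)

def mse(tree):
    """Mean squared error of the tree against the sampled value function."""
    return sum((evaluate(tree, x) - t) ** 2 for x, t in zip(xs, target)) / len(xs)

# Exhaustive search over small trees; a GP algorithm would instead explore
# this space with selection, crossover, and mutation.
best = min(trees(2), key=mse)
print(best, mse(best))
```

The search recovers a parsimonious analytic expression equivalent to x², which illustrates the interpretability advantage over black-box approximators: the result is a readable formula, not a weight vector.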
Original language: English
Number of pages: 2
Publication status: Published - 2019
Event: 2019 Genetic and Evolutionary Computation Conference, GECCO 2019 - Prague, Czech Republic
Duration: 13 Jul 2019 – 17 Jul 2019

Conference

Conference: 2019 Genetic and Evolutionary Computation Conference, GECCO 2019
Country: Czech Republic
City: Prague
Period: 13/07/19 – 17/07/19
