Reinforcement learning for control: Performance, stability, and deep approximators

Lucian Buşoniu, Tim de Bruin, Domagoj Tolić, Jens Kober, Ivana Palunko

Research output: Contribution to journal › Review article › peer-review

70 Citations (Scopus)

Abstract

Reinforcement learning (RL) offers powerful algorithms to search for optimal controllers of systems with nonlinear, possibly stochastic dynamics that are unknown or highly uncertain. This review mainly covers artificial-intelligence approaches to RL, from the viewpoint of the control engineer. We explain how approximate representations of the solution make RL feasible for problems with continuous states and control actions. Stability is a central concern in control, and we argue that while the control-theoretic RL subfield called adaptive dynamic programming is dedicated to it, stability of RL largely remains an open question. We also cover in detail the case where deep neural networks are used for approximation, leading to the field of deep RL, which has shown great success in recent years. With the control practitioner in mind, we outline opportunities and pitfalls of deep RL; and we close the survey with an outlook that – among other things – points out some avenues for bridging the gap between control and artificial-intelligence RL techniques.

Original language: English
Pages (from-to): 8-28
Journal: Annual Reviews in Control
Volume: 46
DOIs
Publication status: Published - 2018

Keywords

  • Adaptive dynamic programming
  • Deep learning
  • Function approximation
  • Optimal control
  • Reinforcement learning
  • Stability

