Reinforcement learning for control: Performance, stability, and deep approximators

Lucian Buşoniu*, Tim de Bruin, Domagoj Tolić, Jens Kober, Ivana Palunko

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review

247 Citations (Scopus)
106 Downloads (Pure)


Reinforcement learning (RL) offers powerful algorithms to search for optimal controllers of systems with nonlinear, possibly stochastic dynamics that are unknown or highly uncertain. This review mainly covers artificial-intelligence approaches to RL, from the viewpoint of the control engineer. We explain how approximate representations of the solution make RL feasible for problems with continuous states and control actions. Stability is a central concern in control, and we argue that while the control-theoretic RL subfield called adaptive dynamic programming is dedicated to it, stability of RL largely remains an open question. We also cover in detail the case where deep neural networks are used for approximation, leading to the field of deep RL, which has shown great success in recent years. With the control practitioner in mind, we outline opportunities and pitfalls of deep RL; and we close the survey with an outlook that – among other things – points out some avenues for bridging the gap between control and artificial-intelligence RL techniques.
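The abstract's central point — that approximate representations make RL feasible for continuous state spaces — can be illustrated with a minimal sketch: semi-gradient Q-learning with a linear radial-basis function approximator on a toy 1-D regulation task. The task, features, and all parameters below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical setup: state x in [-1, 1], two discrete control actions
# (push left / push right), reward penalizing distance from the origin.
rng = np.random.default_rng(0)
ACTIONS = np.array([-0.1, 0.1])

def features(x, a_idx, n=8):
    """Radial-basis features over the continuous state, one block per action."""
    centers = np.linspace(-1.0, 1.0, n)
    phi = np.exp(-((x - centers) ** 2) / 0.1)
    out = np.zeros(n * len(ACTIONS))
    out[a_idx * n:(a_idx + 1) * n] = phi
    return out

def step(x, a):
    """Noisy integrator dynamics; reward is negative squared distance."""
    x_next = np.clip(x + a + 0.01 * rng.standard_normal(), -1.0, 1.0)
    return x_next, -x_next ** 2

w = np.zeros(16)                    # weights of the linear Q-approximator
alpha, gamma, eps = 0.1, 0.95, 0.2  # step size, discount, exploration rate

for episode in range(200):
    x = rng.uniform(-1, 1)
    for t in range(50):
        # epsilon-greedy action selection from the approximate Q-function
        if rng.random() < eps:
            a_idx = int(rng.integers(2))
        else:
            a_idx = int(np.argmax([w @ features(x, i) for i in range(2)]))
        x_next, r = step(x, ACTIONS[a_idx])
        # semi-gradient Q-learning update on the weight vector
        td_target = r + gamma * max(w @ features(x_next, i) for i in range(2))
        td_error = td_target - w @ features(x, a_idx)
        w += alpha * td_error * features(x, a_idx)
        x = x_next
```

After training, the learned value near the origin should exceed the value far from it, since staying near zero incurs less penalty. The survey's point about stability applies directly here: nothing in this update rule guarantees that the closed-loop system under the greedy policy is stable, which is exactly the gap between AI-style RL and control-theoretic adaptive dynamic programming.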

Original language: English
Pages (from-to): 8-28
Journal: Annual Reviews in Control
Publication status: Published - 2018

Bibliographical note

Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project

Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.


Keywords

  • Adaptive dynamic programming
  • Deep learning
  • Function approximation
  • Optimal control
  • Reinforcement learning
  • Stability


