Benchmarking model-free and model-based optimal control

Ivan Koryakovskiy, Manuel Kudruss*, Robert Babuška, Wouter Caarls, Christian Kirches, Katja Mombaur, Johannes P. Schlöder, Heike Vallery

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

18 Citations (Scopus)
256 Downloads (Pure)


Model-free reinforcement learning and nonlinear model predictive control are two different approaches for controlling a dynamic system in an optimal way according to a prescribed cost function. Reinforcement learning acquires a control policy through exploratory interaction with the system, while nonlinear model predictive control exploits an explicitly given mathematical model of the system. In this article, we provide a comprehensive comparison of the performance of reinforcement learning and nonlinear model predictive control for an ideal system as well as for a system with parametric and structural uncertainties. The comparison is based on two different criteria, namely the similarity of trajectories and the resulting rewards. The evaluation of both methods is performed on a standard benchmark problem: a cart–pendulum swing-up and balance task. We first find suitable mathematical formulations and discuss the effect of the differences in the problem formulations. Then, we investigate the robustness of reinforcement learning and nonlinear model predictive control against uncertainties. The results demonstrate that nonlinear model predictive control has advantages over reinforcement learning if uncertainties can be eliminated through identification of the system parameters. Otherwise, there exists a break-even point after which model-free reinforcement learning performs better than nonlinear model predictive control with an inaccurate model. These findings suggest that benefits can be obtained by combining these methods for real systems subject to such uncertainties. In the future, we plan to develop a hybrid controller and evaluate its performance on a real seven-degree-of-freedom walking robot.
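To make the benchmark concrete, the sketch below simulates a generic frictionless cart–pendulum (the classical Barto–Sutton formulation) together with a quadratic stage cost that penalises deviation from the upright equilibrium — the kind of cost both an RL agent (as a negative reward) and an NMPC controller could optimise. This is an illustrative sketch only: the masses, pole length, time step, and cost weights are placeholder values, not the parameters or problem formulation used in the article.

```python
import math

# Placeholder parameters (NOT the article's values): cart mass, pole mass,
# pole half-length, gravity, and integration time step.
M_CART, M_POLE, L_HALF, G, DT = 1.0, 0.1, 0.5, 9.81, 0.02

def step(state, force):
    """One explicit-Euler step of the cart-pendulum dynamics.
    state = (x, x_dot, theta, theta_dot), theta measured from upright."""
    x, x_dot, th, th_dot = state
    total = M_CART + M_POLE
    tmp = (force + M_POLE * L_HALF * th_dot**2 * math.sin(th)) / total
    th_acc = (G * math.sin(th) - math.cos(th) * tmp) / (
        L_HALF * (4.0 / 3.0 - M_POLE * math.cos(th)**2 / total))
    x_acc = tmp - M_POLE * L_HALF * th_acc * math.cos(th) / total
    return (x + DT * x_dot, x_dot + DT * x_acc,
            th + DT * th_dot, th_dot + DT * th_acc)

def quadratic_cost(state, force, w_theta=1.0, w_x=0.1, w_u=0.01):
    """Stage cost for the swing-up-and-balance task: penalise pole angle,
    cart displacement, and control effort (weights are illustrative)."""
    x, _, th, _ = state
    return w_theta * th**2 + w_x * x**2 + w_u * force**2
```

An RL agent would learn a policy minimising the accumulated cost from interaction with `step` alone, whereas an NMPC controller would embed the same dynamics equations as constraints in a receding-horizon optimisation; the article's break-even result concerns what happens when the NMPC model's parameters differ from the true system.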

Original language: English
Pages (from-to): 81-90
Journal: Robotics and Autonomous Systems
Publication status: Published - 2017

Bibliographical note

Accepted Author Manuscript


Keywords

  • Nonlinear model predictive control
  • Optimal control
  • Parametric uncertainties
  • Reinforcement learning
  • Structural uncertainties


