Actor-critic reinforcement learning for tracking control in robotics

Yudha P. Pane, Subramanya P. Nageshrao, Robert Babuska

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

12 Citations (Scopus)


In this article we present an experimental evaluation of a compensation method that improves the tracking performance of a nominal feedback controller by means of reinforcement learning (RL). The compensator is based on the actor-critic scheme: it adds a correction signal to the nominal control input with the goal of improving the tracking performance through online learning. The algorithm was evaluated on a 6-DOF industrial robot manipulator with the objective of accurately tracking different types of reference trajectories. An extensive experimental study shows that the proposed RL-based compensation method significantly improves the performance of the nominal feedback controller.
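The core idea in the abstract can be sketched in a few lines of code: an actor-critic learner whose output is added to a nominal feedback control signal, with both actor and critic updated online from the temporal-difference (TD) error. The following is a minimal illustrative sketch only, not the algorithm or setup from the paper: the scalar plant, the features, the gains, and the learning rates are all assumptions chosen for simplicity.

```python
import numpy as np

# Minimal sketch (assumed setup, not the paper's): an actor-critic
# compensator adds a learned correction to a nominal P-controller and
# learns online from the TD error.

rng = np.random.default_rng(0)

# Discretized first-order plant x_{k+1} = x_k + dt * (-x_k + u_k),
# tracking a constant reference r.
dt, r = 0.1, 1.0
Kp = 2.0                         # nominal feedback gain (leaves a steady-state error)
gamma, sigma = 0.9, 0.3          # discount factor, exploration noise std
alpha_a, alpha_c = 0.2, 0.2      # actor / critic learning rates

theta_c = np.zeros(2)            # critic weights over [e, e^2]
theta_a = np.zeros(2)            # actor weights over [e, 1]

def phi_c(e):                    # critic features (value of tracking error e)
    return np.array([e, e * e])

def phi_a(e):                    # actor features; the bias term can cancel a constant offset
    return np.array([e, 1.0])

x = 0.0
errors = []
for k in range(5000):
    e = r - x
    u_nom = Kp * e                           # nominal feedback control
    n = sigma * rng.standard_normal()        # exploration noise
    u = u_nom + theta_a @ phi_a(e) + n       # nominal input + learned correction
    x_next = x + dt * (-x + u)               # plant step
    e_next = r - x_next
    reward = -e_next ** 2                    # penalize tracking error
    # The TD error drives both the critic and the actor updates.
    delta = reward + gamma * (theta_c @ phi_c(e_next)) - theta_c @ phi_c(e)
    theta_c += alpha_c * delta * phi_c(e)
    theta_a += alpha_a * delta * n * phi_a(e)  # Gaussian policy-gradient-style update
    x = x_next
    errors.append(abs(e_next))

print(float(np.mean(errors[-500:])))  # residual tracking error after learning
```

With Kp = 2 alone, this loop settles at a steady-state error of r/3; the learned bias term in the actor output supplies the missing constant input, so the compensated error ends up well below that. The same additive structure is what makes the approach attractive in practice: the nominal controller guarantees reasonable behavior from the start, while the RL compensator only has to learn a correction.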

Original language: English
Title of host publication: Proceedings 2016 IEEE 55th Conference on Decision and Control (CDC)
Editors: Francesco Bullo, Christophe Prieur, Alessandro Giua
Place of Publication: Piscataway, NJ, USA
ISBN (Electronic): 978-1-5090-1837-6
Publication status: Published - 2016
Event: 55th IEEE Conference on Decision and Control, CDC 2016 - Las Vegas, United States
Duration: 12 Dec 2016 - 14 Dec 2016


Conference: 55th IEEE Conference on Decision and Control, CDC 2016
Abbreviated title: CDC 2016
Country: United States
City: Las Vegas

