Actor-critic reinforcement learning for tracking control in robotics

Yudha P. Pane, Subramanya P. Nageshrao, Robert Babuska

    Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

    26 Citations (Scopus)

    Abstract

    In this article we present experimental results and an evaluation of a compensation method that improves the tracking performance of a nominal feedback controller by means of reinforcement learning (RL). The compensator is based on the actor-critic scheme and adds a correction signal to the nominal control input, with the goal of improving the tracking performance through on-line learning. The algorithm has been evaluated on a 6-DOF industrial robot manipulator with the objective of accurately tracking different types of reference trajectories. An extensive experimental study has shown that the proposed RL-based compensation method significantly improves the performance of the nominal feedback controller.
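
    The idea in the abstract — an actor-critic learner adding a correction signal on top of a nominal feedback controller — can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a scalar first-order plant with an unmodeled drag term, Gaussian radial-basis features of the tracking error, and illustrative hyperparameters.

    ```python
    # Minimal sketch (illustrative, NOT the paper's setup): an actor-critic
    # compensator learns a correction added to a nominal P controller so the
    # closed loop tracks a constant reference despite unmodeled plant dynamics.
    import numpy as np

    rng = np.random.default_rng(0)
    dt = 0.05

    def features(e):
        # normalized Gaussian radial-basis features of the tracking error e
        centers = np.linspace(-1.0, 1.0, 9)
        phi = np.exp(-((e - centers) ** 2) / (2 * 0.25 ** 2))
        return phi / phi.sum()

    theta = np.zeros(9)   # actor weights:  correction u_rl = theta . phi(e)
    w = np.zeros(9)       # critic weights: value      V(e) = w . phi(e)
    alpha_a, alpha_c, gamma, sigma = 0.05, 0.1, 0.97, 0.3

    def run_episode(learn=True):
        x, ref, total_abs_err = 0.0, 1.0, 0.0
        for _ in range(200):
            e = ref - x
            phi = features(e)
            u_nom = 2.0 * e                    # nominal proportional controller
            du = float(theta @ phi)            # learned correction (policy mean)
            xi = rng.normal(0.0, sigma) if learn else 0.0
            u = u_nom + du + xi                # total control input
            x_next = x + dt * (-0.5 * x + u)   # plant with unmodeled -0.5*x drag
            r = -(ref - x_next) ** 2           # reward: negative squared error
            if learn:
                # temporal-difference error drives both critic and actor
                delta = r + gamma * float(w @ features(ref - x_next)) - float(w @ phi)
                w[:] += alpha_c * delta * phi
                theta[:] += alpha_a * delta * (xi / sigma ** 2) * phi
            total_abs_err += abs(e)
            x = x_next
        return total_abs_err

    err_before = run_episode(learn=False)   # nominal controller alone
    for _ in range(50):
        run_episode(learn=True)             # on-line actor-critic learning
    err_after = run_episode(learn=False)    # nominal controller + learned correction
    print(round(err_before, 2), round(err_after, 2))
    ```

    The nominal P controller alone leaves a steady-state error against the unmodeled drag; the actor learns a state-dependent correction that removes it, mirroring the paper's idea of layering learned compensation on an existing feedback law.
    
    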

    Original language: English
    Title of host publication: Proceedings 2016 IEEE 55th Conference on Decision and Control (CDC)
    Editors: Francesco Bullo, Christophe Prieur, Alessandro Giua
    Place of Publication: Piscataway, NJ, USA
    Publisher: IEEE
    Pages: 5819-5826
    ISBN (Electronic): 978-1-5090-1837-6
    DOIs
    Publication status: Published - 2016
    Event: 55th IEEE Conference on Decision and Control, CDC 2016 - Las Vegas, United States
    Duration: 12 Dec 2016 - 14 Dec 2016

    Conference

    Conference: 55th IEEE Conference on Decision and Control, CDC 2016
    Abbreviated title: CDC 2016
    Country/Territory: United States
    City: Las Vegas
    Period: 12/12/16 - 14/12/16
