Deep Reinforcement Learning with Feedback-based Exploration

Jan Scholten, Daan Wout, Carlos Celemin, Jens Kober

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

Deep Reinforcement Learning has enabled the control of increasingly complex and high-dimensional problems. However, the need for vast amounts of data before reasonable performance is attained prevents its widespread application. We employ binary corrective feedback as a general and intuitive means to incorporate human intuition and domain knowledge in model-free machine learning. The uncertainty in the policy and the corrective feedback are combined directly in the action space as probabilistic conditional exploration. As a result, the greatest part of the otherwise ignorant learning process can be avoided. We demonstrate the proposed method, Predictive Probabilistic Merging of Policies (PPMP), in combination with DDPG. In experiments on continuous control problems of the OpenAI Gym, we achieve drastic improvements in sample efficiency, final performance, and robustness to erroneous feedback, for both human and synthetic feedback. Additionally, we show solutions beyond the demonstrated knowledge.
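The abstract's core idea, combining binary corrective feedback with policy uncertainty directly in the action space, can be illustrated with a minimal sketch. Note this is an assumption-based illustration only: the function name, the uncertainty-proportional scaling rule, and the action bounds are hypothetical simplifications, not the exact PPMP formulation from the paper.

```python
import numpy as np

def merge_action(policy_action, policy_std, feedback,
                 action_low=-1.0, action_high=1.0):
    """Illustrative sketch of feedback-based exploration in action space.

    `feedback` is binary corrective feedback in {-1, 0, +1} per action
    dimension. Here the correction magnitude is tied to the policy's
    uncertainty (std), so a confident policy receives smaller nudges.
    These names and the scaling rule are assumptions for illustration.
    """
    correction = feedback * policy_std          # uncertainty-scaled step
    merged = policy_action + correction
    return np.clip(merged, action_low, action_high)

# Example: the trainer signals "increase" (+1) on an uncertain action.
corrected = merge_action(np.array([0.2]), np.array([0.5]), np.array([1.0]))
```

In this sketch, zero feedback leaves the policy's action untouched, while nonzero feedback shifts exploration toward the corrected region, shrinking as the policy becomes more certain.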

Original language: English
Title of host publication: Proceedings of the IEEE 58th Conference on Decision and Control, CDC 2019
Publisher: IEEE
Pages: 803-808
ISBN (Electronic): 978-1-7281-1398-2
DOIs
Publication status: Published - 2020
Event: 58th IEEE Conference on Decision and Control, CDC 2019 - Nice, France
Duration: 11 Dec 2019 - 13 Dec 2019

Conference

Conference: 58th IEEE Conference on Decision and Control, CDC 2019
Country: France
City: Nice
Period: 11/12/19 - 13/12/19

Bibliographical note

Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care
Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.

