Decentralized reinforcement learning applied to mobile robots

David L. Leottau*, Aashish Vatsyayan, Javier Ruiz-del-Solar, Robert Babuška

*Corresponding author for this work

Research output: Chapter in book/conference proceedings (conference contribution) · Scientific · Peer-reviewed

6 Citations (Scopus)

Abstract

In this paper, decentralized reinforcement learning is applied to a control problem with a multidimensional action space. We propose a decentralized reinforcement learning architecture for a mobile robot, where the individual components of the commanded velocity vector are learned in parallel by separate agents. We empirically demonstrate that the decentralized architecture outperforms its centralized counterpart in terms of learning time, while using fewer computational resources. The method is validated on two problems: an extended version of the 3-dimensional mountain car, and a ball-pushing behavior performed with a differential-drive robot, which is also tested on a physical setup.
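The architecture described in the abstract can be sketched as a set of independent learners, one per action dimension, each observing the full state and updating its own value table with the shared reward. The following is a minimal illustrative sketch, not the paper's implementation: it uses tabular Q-learning on a hypothetical toy grid task, with class and parameter names chosen here for illustration.

```python
import random

class DecentralizedQLearners:
    """One independent tabular Q-learner per action dimension.

    Each agent observes the full (discretized) state, selects only its
    own component of the joint action, and updates its own Q-table with
    the shared scalar reward. This is a generic sketch of the
    decentralized scheme, not the authors' code.
    """

    def __init__(self, actions_per_agent, alpha=0.5, gamma=0.95, eps=0.2):
        self.actions = actions_per_agent          # per-dimension action sets
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.q = [dict() for _ in actions_per_agent]  # one Q-table per agent

    def _qvals(self, i, s):
        # Lazily initialize Q-values for a newly seen state.
        return self.q[i].setdefault(s, {a: 0.0 for a in self.actions[i]})

    def act(self, s):
        # Each agent independently epsilon-greedily picks its own component.
        joint = []
        for i in range(len(self.actions)):
            qv = self._qvals(i, s)
            if random.random() < self.eps:
                joint.append(random.choice(self.actions[i]))
            else:
                joint.append(max(qv, key=qv.get))
        return tuple(joint)

    def update(self, s, joint_a, r, s2, done):
        # All agents receive the same global reward signal.
        for i, a in enumerate(joint_a):
            qv = self._qvals(i, s)
            target = r if done else r + self.gamma * max(self._qvals(i, s2).values())
            qv[a] += self.alpha * (target - qv[a])

# Toy usage: two agents, each controlling one component of a 2-D move
# on a 5x5 grid, learning in parallel to reach a goal cell.
random.seed(0)
learners = DecentralizedQLearners(actions_per_agent=[(-1, 0, 1), (-1, 0, 1)])
goal = (3, 3)
for _ in range(500):
    s = (0, 0)
    for _ in range(50):
        a = learners.act(s)
        s2 = (min(4, max(0, s[0] + a[0])), min(4, max(0, s[1] + a[1])))
        done = s2 == goal
        learners.update(s, a, -1.0, s2, done)
        s = s2
        if done:
            break
```

Because each agent searches only its own one-dimensional action set instead of the joint action space, the number of Q-values grows linearly rather than multiplicatively with the number of action dimensions, which is the source of the reported savings in learning time and memory.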

Original language: English
Title of host publication: RoboCup 2016: Robot World Cup XX
Editors: Sven Behnke, Raymond Sheh, Sanem Sarıel, Daniel D. Lee
Place of publication: Cham, Switzerland
Publisher: Springer
Pages: 368-379
Volume: 9776 LNAI
ISBN (Electronic): 978-3-319-68792-6
ISBN (Print): 978-3-319-68791-9
Publication status: Published - 2017
Event: RoboCup 2016: 20th Annual RoboCup International Symposium - Leipzig, Germany
Duration: 30 Jun 2016 - 4 Jul 2016

Publication series

Name: Lecture Notes in Artificial Intelligence
Volume: 9776
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: RoboCup 2016: 20th Annual RoboCup International Symposium
Country/Territory: Germany
City: Leipzig
Period: 30/06/16 - 4/07/16

Keywords

  • Decentralized control
  • Multiagent learning
  • Reinforcement learning
  • Robot soccer
