TY - JOUR
T1 - Decentralized Reinforcement Learning of robot behaviors
AU - Leottau, David L.
AU - Ruiz-del-Solar, Javier
AU - Babuška, Robert
N1 - Accepted Author Manuscript
PY - 2018
Y1 - 2018
N2 - A multi-agent methodology is proposed for Decentralized Reinforcement Learning (DRL) of individual behaviors in problems with multi-dimensional action spaces. Under this methodology, sub-tasks are learned in parallel by individual agents working toward a common goal. In addition to the methodology itself, three specific multi-agent DRL approaches are considered: DRL-Independent, DRL-Cooperative-Adaptive (DRL-CA), and DRL-Lenient. These approaches are validated and analyzed in an extensive empirical study using four different problems: 3D Mountain Car, SCARA Real-Time Trajectory Generation, Ball-Dribbling in humanoid soccer robotics, and Ball-Pushing using differential-drive robots. The experimental validation provides evidence that DRL implementations achieve better performance and faster learning times than their centralized counterparts while using fewer computational resources. The DRL-Lenient and DRL-CA algorithms achieve the best final performance on all four tested problems, outperforming their DRL-Independent counterparts. Furthermore, the benefits of DRL-Lenient and DRL-CA become more noticeable as problem complexity increases and the centralized scheme becomes intractable given the available computational resources and training time.
KW - Autonomous robots
KW - Decentralized control
KW - Distributed artificial intelligence
KW - Multi-agent systems
KW - Reinforcement learning
UR - http://resolver.tudelft.nl/uuid:ca8f4bdd-643f-4d3f-83af-52195921fec6
UR - http://www.scopus.com/inward/record.url?scp=85038868982&partnerID=8YFLogxK
U2 - 10.1016/j.artint.2017.12.001
DO - 10.1016/j.artint.2017.12.001
M3 - Article
AN - SCOPUS:85038868982
VL - 256
SP - 130
EP - 159
JO - Artificial Intelligence
JF - Artificial Intelligence
SN - 0004-3702
ER -