Abstract
Decentralized multi-robot systems typically perform coordinated motion planning by constantly broadcasting their intentions to avoid collisions. However, the risk of collision between robots varies as they move, and communication may not always be needed. This paper presents an efficient communication method that addresses the problem of “when” and “with whom” to communicate in multi-robot collision avoidance scenarios. In this approach, each robot learns to reason about other robots’ states and considers the risk of future collisions before asking for the trajectory plans of other robots. We introduce a new neural architecture for the learned communication policy that makes our method scalable. We evaluate and verify the proposed communication strategy in simulation with up to twelve quadrotors, and present results on the zero-shot generalization and robustness of the policy across different scenarios. We demonstrate that our policy, learned in a simulated environment, can be successfully transferred to real robots.
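The paper's communication policy is learned with multi-agent reinforcement learning, but the core gating idea can be illustrated with a simple hand-crafted analogue: predict the closest approach to a neighbor under a constant-velocity assumption, and request that neighbor's trajectory plan only if the predicted separation falls below a safety distance. The sketch below is purely illustrative; the function names, the constant-velocity prediction, and the threshold are assumptions, not the authors' learned policy.

```python
import math

def min_predicted_separation(p_self, v_self, p_other, v_other, horizon=2.0):
    """Minimum predicted distance to a neighbor over the horizon,
    assuming both robots keep their current velocities (an assumption
    standing in for the learned reasoning in the paper)."""
    # Relative position and velocity of the neighbor w.r.t. this robot.
    rp = [po - ps for ps, po in zip(p_self, p_other)]
    rv = [vo - vs for vs, vo in zip(v_self, v_other)]
    rv2 = sum(c * c for c in rv)
    # Time of closest approach, clamped to [0, horizon].
    t = 0.0 if rv2 == 0 else max(0.0, min(horizon, -sum(a * b for a, b in zip(rp, rv)) / rv2))
    closest = [p + t * v for p, v in zip(rp, rv)]
    return math.sqrt(sum(c * c for c in closest))

def should_request_plan(p_self, v_self, p_other, v_other, safe_dist=1.0):
    """Decide 'whether and with whom' to communicate: ask for this
    neighbor's trajectory only when the predicted separation is unsafe."""
    return min_predicted_separation(p_self, v_self, p_other, v_other) < safe_dist
```

For example, two robots flying head-on toward each other trigger a request, while robots moving in parallel at a large offset do not, so each robot communicates only with the neighbors that actually pose a collision risk.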
Original language | English |
---|---|
Pages (from-to) | 1275-1297 |
Number of pages | 23 |
Journal | Autonomous Robots |
Volume | 47 |
Issue number | 8 |
DOIs | |
Publication status | Published - 2023 |
Keywords
- Aerial robots
- Collision avoidance
- Multi-agent reinforcement learning
- Multi-robot communication
- Multi-robot systems