The key difficulty of cooperative, decentralized planning lies in making accurate predictions about the behavior of one's teammates. In this paper we introduce Alternating maximization with Behavioural Cloning (ABC), a trainable online decentralized planning algorithm based on Monte Carlo Tree Search (MCTS), combined with models of teammates learned from previous episodic runs. Our algorithm relies on the idea of alternating maximization, where agents adapt their models one at a time in a round-robin manner. Under the assumption of perfect policy cloning, and with a sufficient number of Monte Carlo samples, successive iterations of our method are guaranteed to improve joint policies, and eventually converge.
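The alternating-maximization idea can be illustrated on a toy cooperative game. The sketch below is purely illustrative: the payoff matrix and function names are hypothetical, and it omits the MCTS planning and behavioural-cloning teammate models that ABC actually uses. Each agent in turn picks a best response while the other's choice is held fixed, so the joint value never decreases and the process converges (possibly to a local optimum).

```python
# Toy alternating maximization (round-robin best response) for two
# cooperating agents sharing one payoff. Hypothetical example, not the
# paper's ABC algorithm: ABC replaces exact maximization with MCTS and
# fixed teammate actions with behaviour-cloned teammate models.

# Shared payoff for a 2-agent cooperative game: PAYOFF[a1][a2].
PAYOFF = [
    [1.0, 0.0, 0.2],
    [0.0, 2.0, 0.1],
    [0.3, 0.1, 1.5],
]
ACTIONS = range(3)

def joint_value(a1, a2):
    return PAYOFF[a1][a2]

def alternating_maximization(a1, a2, sweeps=5):
    """Agents take turns best-responding to the other's fixed action.
    The joint value is monotonically non-decreasing across turns."""
    for _ in range(sweeps):
        a1 = max(ACTIONS, key=lambda a: joint_value(a, a2))  # agent 1's turn
        a2 = max(ACTIONS, key=lambda a: joint_value(a1, a))  # agent 2's turn
    return a1, a2

# Starting from the poor joint action (1, 2) with value 0.1, round-robin
# best responses settle on (2, 2) with value 1.5 -- an improvement, though
# not the global optimum (1, 1), illustrating convergence to a local maximum.
a1, a2 = alternating_maximization(1, 2)
```

Note that the procedure started from (0, 0) would stay at (0, 0): alternating maximization guarantees monotone improvement and convergence, not global optimality.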
Title of host publication: BNAIC/BeneLearn 2020
Editors: Lu Cao, Walter Kosters, Jefrey Lijffijt
Publication status: Published - 2020
Event: BNAIC/BENELEARN 2020 - Leiden, Netherlands
Duration: 19 Nov 2020 → 20 Nov 2020