Benchmarking Robustness and Generalization in Multi-Agent Systems: A Case Study on Neural MMO

Yangkun Chen, Chenghui Yu, Hengman Zhu, Shuai Liu, Yibing Zhang, Joseph Suarez, Liang Zhao, Jinke He, Jiaxin Chen, More Authors

Research output: Contribution to journal › Conference article › Scientific › peer-review


Abstract

We present the results of the second Neural MMO challenge, hosted at IJCAI 2022, which received more than 1,600 submissions. The competition targets robustness and generalization in multi-agent systems: participants train teams of agents to complete a multi-task objective against opponents not seen during training. We summarize the competition design and results and, taking our work as a case study, argue that competitions are an effective approach to solving hard problems and establishing solid benchmarks for algorithms. We will open-source our benchmark, including the environment wrapper, baselines, a visualization tool, and selected policies, for further research.

Original language: English
Pages (from-to): 2490-2492
Number of pages: 3
Journal: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
Volume: 2023-May
Publication status: Published - 2023
Event: 22nd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2023 - London, United Kingdom
Duration: 29 May 2023 - 2 Jun 2023

Bibliographical note

Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care
Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.

Keywords

  • Benchmark
  • Competition
  • Multi-agent Reinforcement Learning

