Abstract
This article describes the MultiAgent Decision Process (MADP) toolbox, a software library that supports planning and learning for intelligent agents and multiagent systems in uncertain environments. Key features are that it supports partially observable environments and stochastic transition models; has unified support for single- and multiagent systems; provides a large number of models for decision-theoretic decision making, including one-shot and sequential decision making under various assumptions of observability and cooperation, such as Dec-POMDPs and POSGs; provides tools and parsers to quickly prototype new problems; provides an extensive range of planning and learning algorithms for single- and multiagent systems; is released under the GNU GPL v3 license; and is written in C++ and designed to be extensible via the object-oriented paradigm.
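To illustrate the kind of model the toolbox targets, the following is a minimal, self-contained C++ sketch of the components of a finite Dec-POMDP (states, joint actions, joint observations, stochastic transition and observation models, and a shared reward). The type and member names are hypothetical and are not the MADP toolbox's actual API.

```cpp
// Illustrative only: a minimal sketch of a finite Dec-POMDP's components.
// Names here are hypothetical and do NOT reflect the MADP toolbox's C++ API.
#include <cstddef>
#include <vector>

struct DecPOMDPSketch {
    std::size_t nAgents;        // number of cooperating agents
    std::size_t nStates;        // |S|
    std::size_t nJointActions;  // |A|, product of per-agent action sets
    std::size_t nJointObs;      // |O|, product of per-agent observation sets

    // T[s][a][s'] = Pr(s' | s, a): stochastic transition model
    std::vector<std::vector<std::vector<double>>> T;
    // O[a][s'][o] = Pr(o | a, s'): joint observation model (partial observability)
    std::vector<std::vector<std::vector<double>>> O;
    // R[s][a]: shared immediate reward (cooperative setting)
    std::vector<std::vector<double>> R;
};

int main() {
    // Toy 2-agent problem: 2 states, 4 joint actions, 4 joint observations,
    // with all probability tables initialized to uniform distributions.
    DecPOMDPSketch m{2, 2, 4, 4, {}, {}, {}};
    m.T.assign(m.nStates, std::vector<std::vector<double>>(
                   m.nJointActions,
                   std::vector<double>(m.nStates, 1.0 / m.nStates)));
    m.O.assign(m.nJointActions, std::vector<std::vector<double>>(
                   m.nStates,
                   std::vector<double>(m.nJointObs, 1.0 / m.nJointObs)));
    m.R.assign(m.nStates, std::vector<double>(m.nJointActions, 0.0));
    return 0;
}
```

A planner for such a model would search over joint policies mapping each agent's individual observation histories to its actions; the toolbox's algorithms operate on models of this general form.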
| Original language | English |
| --- | --- |
| Pages (from-to) | 1-5 |
| Number of pages | 5 |
| Journal | Journal of Machine Learning Research |
| Volume | 18 |
| Issue number | 89 |
| Publication status | Published - Aug 2017 |
Keywords
- software
- decision-theoretic planning
- reinforcement learning
- multiagent systems