A cross-entropy approach to solving Dec-POMDPs

Research output: Conference contribution (chapter in book/conference proceedings/edited volume), scientific, peer-reviewed

4 Citations (Scopus)

Abstract

In this paper we focus on distributed multiagent planning under uncertainty. For single-agent planning under uncertainty, the partially observable Markov decision process (POMDP) is the dominant model (see [Spaan and Vlassis, 2005] and references therein). Recently, several generalizations of the POMDP to multiagent settings have been proposed. Here we consider the decentralized POMDP (Dec-POMDP) model for multiagent planning under uncertainty [Bernstein et al., 2002, Goldman and Zilberstein, 2004]. Solving a Dec-POMDP amounts to finding a set of optimal policies for the agents that maximizes the expected shared reward. However, solving a Dec-POMDP optimally has proven to be hard (NEXP-complete): the number of possible deterministic policies for a single agent grows doubly exponentially with the planning horizon, and exponentially with the number of available actions and observations. As a result, the focus has shifted to approximate solution techniques [Nair et al., 2003, Emery-Montemerlo et al., 2005, Oliehoek and Vlassis, 2007].

Original language: English
Title of host publication: Advances in Intelligent and Distributed Computing: Proceedings of the 1st International Symposium on Intelligent and Distributed Computing IDC'2007, Craiova, Romania, October 2007
Pages: 145-154
Number of pages: 10
Volume: 78
DOIs
Publication status: Published - 2008
Externally published: Yes

Publication series

Name: Studies in Computational Intelligence
Volume: 78
ISSN (Print): 1860-949X
