Dealing With Groups of Actions in Multiagent Markov Decision Processes

Guillaume Debras, A. Mouaddib, L. Jeanpierre, Simon Le Gloannec

Published 2016 in International Joint Conference on Computational Intelligence

ABSTRACT

Multiagent Markov Decision Processes (MMDPs) provide a useful framework for multiagent decision making. However, solving large-scale problems, or problems with many agents, has been proven computationally hard. In this paper, we adapt H-(PO)MDPs to multiagent settings by proposing a new approach that uses action groups to decompose an initial MMDP into a set of dependent sub-MMDPs, assigning each action group its own sub-MMDP. The sub-MMDPs are then solved with a parallel Bellman backup to derive local policies, which are synchronized by propagating local results and updating the value functions both locally and globally to account for the dependencies. This decomposition also allows, for example, aggregation specific to each sub-MMDP, which we support with a novel value-function update. Experimental evaluations, including deployments on real robotic platforms, show promising results and validate our techniques.
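The decomposition described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the sub-MMDPs, their action groups, and the final synchronization step (here, a simple element-wise maximum over local value functions) are all illustrative assumptions; only the per-sub-MMDP Bellman backup is standard value iteration.

```python
# Illustrative sketch only (not the paper's algorithm): decompose an MMDP
# into sub-MMDPs by action group, solve each with Bellman backups, then
# combine the local value functions in a naive synchronization step.
import numpy as np

def bellman_backup(V, P, R, gamma):
    """One synchronous Bellman backup for a sub-MMDP.
    P: (A, S, S) transition tensor, R: (A, S) reward matrix."""
    # Q[a, s] = R[a, s] + gamma * sum_{s'} P[a, s, s'] * V[s']
    Q = R + gamma * (P @ V)
    return Q.max(axis=0)

def solve_sub_mmdp(P, R, gamma=0.95, eps=1e-6):
    """Value iteration over one sub-MMDP until convergence."""
    V = np.zeros(P.shape[1])
    while True:
        V_new = bellman_backup(V, P, R, gamma)
        if np.max(np.abs(V_new - V)) < eps:
            return V_new
        V = V_new

# Two hypothetical action groups over a shared 3-state space.
rng = np.random.default_rng(0)
def random_sub_mmdp(num_actions, num_states):
    P = rng.random((num_actions, num_states, num_states))
    P /= P.sum(axis=2, keepdims=True)   # rows become distributions
    R = rng.random((num_actions, num_states))
    return P, R

sub_mmdps = [random_sub_mmdp(2, 3), random_sub_mmdp(3, 3)]
local_values = [solve_sub_mmdp(P, R) for P, R in sub_mmdps]

# Naive synchronization (an assumption, standing in for the paper's
# local/global value-function update): keep the best local estimate.
V_global = np.maximum.reduce(local_values)
```

In the paper the synchronization propagates local results and updates value functions locally and globally to respect inter-group dependencies; the element-wise maximum above merely stands in for that step so the sketch runs end to end.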

PUBLICATION RECORD

  • Publication year

    2016

  • Venue

    International Joint Conference on Computational Intelligence


  • Fields of study

    Computer Science


