Emergent Coordination Through Competition

Siqi Liu, Guy Lever, J. Merel, S. Tunyasuvunakool, N. Heess, T. Graepel

Published 2019 in International Conference on Learning Representations

ABSTRACT

We study the emergence of cooperative behaviors in reinforcement learning agents by introducing a challenging competitive multi-agent soccer environment with continuous simulated physics. We demonstrate that decentralized, population-based training with co-play can lead to a progression in agents' behaviors: from random, to simple ball chasing, and finally showing evidence of cooperation. Our study highlights several of the challenges encountered in large-scale multi-agent training in continuous control. In particular, we demonstrate that the automatic optimization of simple shaping rewards, not themselves conducive to cooperative behavior, can lead to long-horizon team behavior. We further apply an evaluation scheme, grounded in game-theoretic principles, that can assess agent performance in the absence of pre-defined evaluation tasks or human baselines.
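The abstract names two training ingredients: dense shaping rewards mixed with the sparse scoring reward, and population-based optimization of the mixing weights themselves. A minimal sketch of that idea follows; the function names, shaping signals, and the exploit-and-perturb scheme are illustrative assumptions, not the paper's actual implementation:

```python
import random

def total_reward(scored, shaping_signals, weights):
    """Sparse scoring reward plus a weighted sum of dense shaping terms.

    The shaping weights are per-agent parameters, so population-based
    training can tune them rather than a human fixing them by hand.
    """
    dense = sum(w * s for w, s in zip(weights, shaping_signals))
    return (1.0 if scored else 0.0) + dense

def pbt_exploit_explore(population, fitness, perturb=0.2):
    """One population-based-training step on the shaping weights:
    the weakest agent copies the strongest agent's weights, then
    perturbs them multiplicatively to keep exploring."""
    ranked = sorted(population, key=fitness, reverse=True)
    best, worst = ranked[0], ranked[-1]
    worst["weights"] = [w * random.uniform(1 - perturb, 1 + perturb)
                        for w in best["weights"]]
    return population
```

In this sketch, `fitness` would come from win rates in co-play matches against other members of the population, so the shaping weights are selected for their effect on competitive outcomes, not for matching any hand-designed notion of teamwork.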

PUBLICATION RECORD

  • Publication year

    2019

  • Venue

    International Conference on Learning Representations

  • Publication date

    2019-02-19

  • Fields of study

    Physics, Computer Science


  • Source metadata

    Semantic Scholar


REFERENCES

45 references

CITED BY

158 citing papers