The Role of Pretrained Representations for the OOD Generalization of RL Agents

Andrea Dittadi, Frederik Träuble, Manuel Wüthrich, Felix Widmaier, Peter Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer

Published in 2021 at the International Conference on Learning Representations (ICLR)

ABSTRACT

Building sample-efficient agents that generalize out-of-distribution (OOD) in real-world settings remains a fundamental unsolved problem on the path towards achieving higher-level cognition. One particularly promising approach is to begin with low-dimensional, pretrained representations of our world, which should facilitate efficient downstream learning and generalization. By training 240 representations and over 10,000 reinforcement learning (RL) policies on a simulated robotic setup, we evaluate to what extent different properties of pretrained VAE-based representations affect the OOD generalization of downstream agents. We observe that many agents are surprisingly robust to realistic distribution shifts, including the challenging sim-to-real case. In addition, we find that the generalization performance of a simple downstream proxy task reliably predicts the generalization performance of our RL agents under a wide range of OOD settings. Such proxy tasks can thus be used to select pretrained representations that will lead to agents that generalize.
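
To make the setup described in the abstract concrete, below is a minimal PyTorch sketch of its two ingredients: a frozen, pretrained VAE encoder that supplies a low-dimensional representation to a small downstream policy head, and a cheap supervised proxy task whose out-of-distribution error is used to rank candidate encoders. All names (VAEEncoder, make_policy, proxy_ood_score), architectures, and dimensions here are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class VAEEncoder(nn.Module):
        """Stand-in for a pretrained VAE encoder; maps 64x64 RGB images to latents."""
        def __init__(self, latent_dim: int = 10):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
                nn.Flatten(),
            )
            # 64x64 input -> 31x31 -> 14x14 feature maps; use the posterior
            # mean as the representation.
            self.mu = nn.Linear(64 * 14 * 14, latent_dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.mu(self.conv(x))

    def make_policy(latent_dim: int, n_actions: int) -> nn.Module:
        """Small MLP policy head; only this part is trained downstream."""
        return nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                             nn.Linear(64, n_actions))

    def proxy_ood_score(encoder: nn.Module,
                        train_imgs, train_targets,
                        ood_imgs, ood_targets) -> float:
        """Illustrative proxy task: fit a linear map from latents to
        ground-truth factors on in-distribution data, then report its
        mean squared error on OOD data."""
        with torch.no_grad():
            z_tr, z_ood = encoder(train_imgs), encoder(ood_imgs)
        w = torch.linalg.lstsq(z_tr, train_targets).solution  # least-squares fit
        return float(((z_ood @ w - ood_targets) ** 2).mean())

    encoder = VAEEncoder(latent_dim=10)
    encoder.requires_grad_(False)        # freeze the representation; train only the policy
    policy = make_policy(latent_dim=10, n_actions=9)

    obs = torch.rand(8, 3, 64, 64)       # dummy batch of image observations
    action_logits = policy(encoder(obs)) # the policy acts on the latent code

    # Rank candidate encoders by proxy-task OOD error (lower would predict
    # better OOD generalization of the downstream agent), on dummy data:
    score = proxy_ood_score(encoder,
                            torch.rand(256, 3, 64, 64), torch.rand(256, 3),
                            torch.rand(64, 3, 64, 64), torch.rand(64, 3))

Under this reading of the abstract, representation selection never requires training an RL agent: one would train the proxy regressor once per encoder and keep the encoders whose OOD proxy error is lowest.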

PUBLICATION RECORD

  • Publication year

    2021

  • Venue

    International Conference on Learning Representations

  • Publication date

    2021-07-12

  • Fields of study

    Mathematics, Computer Science

  • Source metadata

    Semantic Scholar

REFERENCES

78 references.

CITED BY

19 citing papers.