LTL2Action: Generalizing LTL Instructions for Multi-Task RL

Pashootan Vaezipoor, Andrew C. Li, Rodrigo Toro Icarte, Sheila A. McIlraith

Published in 2021 at the International Conference on Machine Learning (ICML)

ABSTRACT

We address the problem of teaching a deep reinforcement learning (RL) agent to follow instructions in multi-task environments. Instructions are expressed in a well-known formal language -- linear temporal logic (LTL) -- and can specify a diversity of complex, temporally extended behaviours, including conditionals and alternative realizations. Our proposed learning approach exploits the compositional syntax and the semantics of LTL, enabling our RL agent to learn task-conditioned policies that generalize to new instructions not observed during training. To reduce the overhead of learning LTL semantics, we introduce an environment-agnostic LTL pretraining scheme that improves sample efficiency in downstream environments. Experiments on discrete and continuous domains target combinatorial task sets of up to $\sim10^{39}$ unique tasks and demonstrate the strength of our approach in learning to solve (unseen) tasks given LTL instructions.
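
For intuition, one standard way to operationalize LTL semantics for task monitoring is formula progression: after each environment step, the task formula is rewritten so that it always expresses what remains to be done, and the policy can be conditioned on this progressed formula. The Python sketch below illustrates that idea under our own assumptions (a tuple-based formula encoding and hypothetical proposition names such as coffee and office); it is an illustration of standard progression, not the authors' implementation.

    # Minimal sketch of LTL formula progression (illustrative only).
    # Formulas are nested tuples, e.g. ("until", phi1, phi2).

    def f_and(a, b):
        # Conjunction with Boolean simplification.
        if a is True:
            return b
        if b is True:
            return a
        if a is False or b is False:
            return False
        return ("and", a, b)

    def f_or(a, b):
        # Disjunction with Boolean simplification.
        if a is True or b is True:
            return True
        if a is False:
            return b
        if b is False:
            return a
        return ("or", a, b)

    def progress(f, true_props):
        """One step of progression: rewrite f given the set of
        propositions observed to hold at the current step."""
        if isinstance(f, bool):
            return f
        op = f[0]
        if op == "prop":                      # atomic proposition
            return f[1] in true_props
        if op == "not":                       # assumes negation on atoms only
            return f[1][1] not in true_props
        if op == "and":
            return f_and(progress(f[1], true_props), progress(f[2], true_props))
        if op == "or":
            return f_or(progress(f[1], true_props), progress(f[2], true_props))
        if op == "next":                      # X phi: phi must hold next step
            return f[1]
        if op == "until":                     # phi1 U phi2
            return f_or(progress(f[2], true_props),
                        f_and(progress(f[1], true_props), f))
        if op == "eventually":                # F phi
            return f_or(progress(f[1], true_props), f)
        if op == "always":                    # G phi
            return f_and(progress(f[1], true_props), f)
        raise ValueError(f"unknown operator: {op}")

    # Hypothetical task: eventually reach 'coffee', then eventually 'office'.
    task = ("eventually", ("and", ("prop", "coffee"),
                                  ("eventually", ("prop", "office"))))
    task = progress(task, {"coffee"})   # remaining obligation mentions 'office'
    task = progress(task, {"office"})   # -> True: the task is satisfied

A natural reward scheme on top of such progression is to give a positive reward when the formula progresses to True and to terminate the episode if it collapses to False; because the progressed formula is itself an LTL formula, a single formula-conditioned policy can in principle cover the whole combinatorial task set.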

PUBLICATION RECORD

  • Publication year

    2021

  • Venue

    International Conference on Machine Learning

  • Publication date

    2021-02-13

  • Fields of study

    Computer Science

  • Source metadata

    Semantic Scholar
