Autonomous transfer for reinforcement learning

Matthew E. Taylor, Gregory Kuhlmann, Peter Stone

Published 2008 in Adaptive Agents and Multi-Agent Systems

ABSTRACT

Recent work in transfer learning has succeeded in making reinforcement learning algorithms more efficient by incorporating knowledge from previous tasks. However, such methods typically must be provided with either a full model of the tasks or an explicit relation mapping one task to the other. An autonomous agent may not have access to such high-level information, but it can analyze its own experience to find similarities between tasks. In this paper we introduce Modeling Approximate State Transitions by Exploiting Regression (MASTER), a method for automatically learning a mapping from one task to another through an agent's experience. We empirically demonstrate that such learned relationships can significantly improve the speed of a reinforcement learning algorithm in a series of Mountain Car tasks. Additionally, we demonstrate that our method may also assist with the difficult problem of task selection for transfer.
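The abstract's core idea — learning an inter-task mapping from transition data rather than being handed one — can be illustrated with a small sketch. The toy tasks, function names, and the 1-nearest-neighbour transition model below are illustrative assumptions, not the paper's actual method or experiments: a regression model of the source task's transitions is fit from experience, and each candidate mapping of target state variables onto source state variables is scored by how well that model predicts the mapped target transitions.

```python
import itertools
import random

random.seed(0)

# Hypothetical two-variable task pair (names and dynamics are illustrative,
# not taken from the paper): the source task's state is (position, velocity).
def source_step(state):
    pos, vel = state
    return (pos + vel, 0.9 * vel)

# The target task has identical dynamics but stores its state variables in
# the opposite order, (velocity, position) -- the mapping we hope to recover.
def target_step(state):
    vel, pos = state
    new_pos, new_vel = source_step((pos, vel))
    return (new_vel, new_pos)

def sample_transitions(step, n):
    """Collect (state, next_state) pairs from random states in [0, 1)^2."""
    data = []
    for _ in range(n):
        s = (random.random(), random.random())
        data.append((s, step(s)))
    return data

source_data = sample_transitions(source_step, 500)
target_data = sample_transitions(target_step, 50)

# Approximate source-task transition model: 1-nearest-neighbour lookup over
# the collected source samples (a stand-in for a learned regression model).
def source_model(state):
    _, nxt = min(source_data,
                 key=lambda d: sum((a - b) ** 2 for a, b in zip(d[0], state)))
    return nxt

def apply_mapping(state, mapping):
    # mapping[i] = index of the target variable that fills source slot i
    return tuple(state[j] for j in mapping)

# Score a candidate one-to-one state-variable mapping by the mean squared
# error of the source model's predictions on mapped target transitions.
def mapping_error(mapping):
    err = 0.0
    for s, s_next in target_data:
        pred = source_model(apply_mapping(s, mapping))
        actual = apply_mapping(s_next, mapping)
        err += sum((a - b) ** 2 for a, b in zip(pred, actual))
    return err / len(target_data)

best_mapping = min(itertools.permutations(range(2)), key=mapping_error)
print(best_mapping)  # the swap (1, 0) scores lowest for this toy pair
```

Because the wrong mapping feeds velocity where the source model expects position (and vice versa), its prediction error is large, while the correct swap's error is only the nearest-neighbour approximation noise — so exhaustively scoring the candidate mappings recovers the relationship between the tasks from experience alone.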

PUBLICATION RECORD

  • Publication year

    2008

  • Venue

    Adaptive Agents and Multi-Agent Systems

  • Publication date

    2008-05-12

  • Fields of study

    Computer Science

