The Benefit of Multitask Representation Learning

Andreas Maurer, Massimiliano Pontil, Bernardino Romera-Paredes

Published 2015 in Journal of Machine Learning Research

ABSTRACT

We discuss a general method to learn data representations from multiple tasks. We provide a justification for this method in both settings of multitask learning and learning-to-learn. The method is illustrated in detail in the special case of linear feature learning. Conditions on the theoretical advantage offered by multitask representation learning over independent task learning are established. In particular, focusing on the important example of half-space learning, we derive the regime in which multitask representation learning is beneficial over independent task learning, as a function of the sample size, the number of tasks and the intrinsic data dimensionality. Other potential applications of our results include multitask feature learning in reproducing kernel Hilbert spaces and multilayer, deep networks.
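The special case of linear feature learning mentioned in the abstract can be made concrete with a small sketch. The snippet below learns a single linear map shared across tasks by alternating least squares on synthetic data; this is an illustrative surrogate for multitask representation learning, not the paper's exact algorithm, and the function name, dimensions, and data-generation choices are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multitask data: T tasks whose predictors all lie in a
# shared k-dimensional subspace of R^d (the "representation").
d, k, T, n = 20, 3, 10, 50
D_true = np.linalg.qr(rng.standard_normal((d, k)))[0]  # shared map, d x k
tasks = []
for _ in range(T):
    w = rng.standard_normal(k)                 # task-specific weights
    X = rng.standard_normal((n, d))
    y = X @ D_true @ w + 0.1 * rng.standard_normal(n)
    tasks.append((X, y))

def multitask_linear_features(tasks, k, iters=50):
    """Alternate between per-task weights w_t and the shared map D,
    minimizing sum_t ||X_t D w_t - y_t||^2 (an illustrative surrogate
    for shared linear feature learning, not the paper's algorithm)."""
    d = tasks[0][0].shape[1]
    D = np.linalg.qr(np.random.default_rng(1).standard_normal((d, k)))[0]
    for _ in range(iters):
        # w-step: each task fits weights in the shared feature space.
        W = [np.linalg.lstsq(X @ D, y, rcond=None)[0] for X, y in tasks]
        # D-step: X_t D w_t = kron(w_t, X_t) @ vec(D), so the shared map
        # solves one pooled least-squares problem over all tasks.
        A = np.vstack([np.kron(w, X) for (X, y), w in zip(tasks, W)])
        b = np.concatenate([y for X, y in tasks])
        D = np.linalg.lstsq(A, b, rcond=None)[0].reshape(k, d).T
        D = np.linalg.qr(D)[0]  # re-orthonormalize; column space unchanged
    # Refit weights against the final map before returning.
    W = [np.linalg.lstsq(X @ D, y, rcond=None)[0] for X, y in tasks]
    return D, W

D_est, W_est = multitask_linear_features(tasks, k)
mse = np.mean([np.mean((X @ D_est @ w - y) ** 2)
               for (X, y), w in zip(tasks, W_est)])
```

Because every task's data contributes to estimating the shared map `D`, each per-task fit only has to estimate `k` weights instead of `d`; this pooling of samples across tasks is, informally, the source of the benefit over independent task learning that the paper quantifies.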

PUBLICATION RECORD

  • Publication year

    2015

  • Venue

    Journal of Machine Learning Research

  • Publication date

    2015-05-23

  • Fields of study

    Mathematics, Computer Science


  • Source metadata

    Semantic Scholar

REFERENCES

35 references

CITED BY

405 citing papers