Train faster, generalize better: Stability of stochastic gradient descent

Moritz Hardt, B. Recht, Y. Singer

Published 2015 in International Conference on Machine Learning

ABSTRACT

We show that parametric models trained by a stochastic gradient method (SGM) with few iterations have vanishing generalization error. We prove our results by arguing that SGM is algorithmically stable in the sense of Bousquet and Elisseeff. Our analysis only employs elementary tools from convex and continuous optimization. We derive stability bounds for both convex and non-convex optimization under standard Lipschitz and smoothness assumptions. Applying our results to the convex case, we provide new insights for why multiple epochs of stochastic gradient methods generalize well in practice. In the non-convex case, we give a new interpretation of common practices in neural networks, and formally show that popular techniques for training large deep models are indeed stability-promoting. Our findings conceptually underscore the importance of reducing training time beyond its obvious benefit.

PUBLICATION RECORD

  • Publication year

    2015

  • Venue

    International Conference on Machine Learning

  • Publication date

    2015-09-03

  • Fields of study

    Mathematics, Computer Science


  • Source metadata

    Semantic Scholar



REFERENCES

  • 46 references

CITED BY

  • 1,371 citing papers