Training Very Deep Networks

Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber

Published 2015 in Neural Information Processing Systems

ABSTRACT

Theoretical and empirical evidence indicates that the depth of neural networks is crucial for their success. However, training becomes more difficult as depth increases, and training of very deep networks remains an open problem. Here we introduce a new architecture designed to overcome this. Our so-called highway networks allow unimpeded information flow across many layers on information highways. They are inspired by Long Short-Term Memory recurrent networks and use adaptive gating units to regulate the information flow. Even with hundreds of layers, highway networks can be trained directly through simple gradient descent. This enables the study of extremely deep and efficient architectures.
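The adaptive gating the abstract describes combines each layer's transformation with a "carry" path that passes the input through unchanged: y = H(x) · T(x) + x · (1 − T(x)), where T is a sigmoid transform gate. The sketch below is a minimal illustration of one such layer; the dimensions, initialization scale, and negative gate bias are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Illustrative parameters for one highway layer (shapes and scales assumed).
W_H = rng.standard_normal((dim, dim)) * 0.1  # weights of the plain transform H
b_H = np.zeros(dim)
W_T = rng.standard_normal((dim, dim)) * 0.1  # weights of the transform gate T
b_T = np.full(dim, -2.0)  # negative bias initially favors carrying x through

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x):
    h = np.tanh(x @ W_H + b_H)    # candidate transformation H(x)
    t = sigmoid(x @ W_T + b_T)    # transform gate T(x), elementwise in (0, 1)
    return h * t + x * (1.0 - t)  # gated mix of transform and carry paths

x = rng.standard_normal(dim)
y = highway_layer(x)
print(y.shape)
```

When the gate saturates near zero the layer reduces to the identity, which is what lets gradients flow unimpeded through many stacked layers.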

PUBLICATION RECORD

  • Publication year

    2015

  • Venue

    Neural Information Processing Systems

  • Publication date

    2015-07-22

  • Fields of study

    Computer Science

  • Source metadata

    Semantic Scholar


REFERENCES

40 references

CITED BY

1,735 citing papers