Convolutional Sequence to Sequence Learning

Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann Dauphin

Published in 2017 at the International Conference on Machine Learning

ABSTRACT

The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
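To make the abstract's gating mechanism concrete, the sketch below shows one causal convolutional block with a gated linear unit (GLU) and a scaled residual connection, the building block the abstract describes. This is a minimal PyTorch illustration, not the authors' fairseq implementation; the class name, kernel size, and the standalone (no-attention) block structure are assumptions for demonstration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalConvGLUBlock(nn.Module):
    """Illustrative sketch of one convolutional block with a GLU,
    in the style of Gehring et al. (2017). Not the fairseq reference code.

    The convolution emits 2*d channels; GLU splits them into content A and
    gate B and returns A * sigmoid(B). Left-only padding keeps the
    convolution causal, so position i never attends to future positions.
    """

    def __init__(self, d_model: int, kernel_size: int = 3):
        super().__init__()
        self.kernel_size = kernel_size
        self.conv = nn.Conv1d(d_model, 2 * d_model, kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_model); Conv1d expects (batch, channels, time)
        residual = x
        h = x.transpose(1, 2)
        # Pad only on the left so output at time t depends on inputs <= t
        h = F.pad(h, (self.kernel_size - 1, 0))
        h = self.conv(h)            # (batch, 2*d_model, time)
        h = F.glu(h, dim=1)         # A * sigmoid(B), back to d_model channels
        h = h.transpose(1, 2)
        # Residual connection scaled by sqrt(0.5), as in the paper
        return (h + residual) * (0.5 ** 0.5)


if __name__ == "__main__":
    block = CausalConvGLUBlock(d_model=8)
    x = torch.randn(2, 10, 8)
    print(block(x).shape)  # torch.Size([2, 10, 8])
```

The gating is what the abstract means by "eases gradient propagation": the content half A carries a linear path for gradients, while sigmoid(B) controls how much of each channel passes through. Because every time step is computed by the same convolution, all positions in the sequence can be processed in parallel during training, unlike a recurrent model that must step through time.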

PUBLICATION RECORD

  • Publication year

    2017

  • Venue

    International Conference on Machine Learning

  • Publication date

    2017-05-08

  • Fields of study

    Computer Science

  • Source metadata

    Semantic Scholar

CLAIMS

  • No claims are published for this paper.

CONCEPTS

  • No concepts are published for this paper.

REFERENCES

49 references

CITED BY

3,491 citing papers