Pay Less Attention with Lightweight and Dynamic Convolutions

Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, Michael Auli

Published 2019 in International Conference on Learning Representations

ABSTRACT

Self-attention is a useful mechanism to build generative models for language and images. It determines the importance of context elements by comparing each element to the current time step. In this paper, we show that a very lightweight convolution can perform competitively with the best reported self-attention results. Next, we introduce dynamic convolutions, which are simpler and more efficient than self-attention. We predict separate convolution kernels based solely on the current time step in order to determine the importance of context elements. The number of operations required by this approach scales linearly in the input length, whereas self-attention is quadratic. Experiments on large-scale machine translation, language modeling and abstractive summarization show that dynamic convolutions improve over strong self-attention models. On the WMT'14 English-German test set, dynamic convolutions achieve a new state of the art of 29.7 BLEU.
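The core idea in the abstract — predicting a softmax-normalized kernel from the current time step alone and applying it to a fixed-width causal context window — can be illustrated with a minimal single-head sketch. This is an assumption-laden simplification, not the paper's implementation: the names (`dynamic_conv`, `W_kernel`) are hypothetical, and the real LightConv/DynamicConv modules additionally share kernel weights across channel groups (heads), which is omitted here.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_conv(x, W_kernel, k):
    """Simplified dynamic convolution: at each time step t, a kernel of
    width k is predicted from x[t] alone (via the hypothetical projection
    W_kernel), softmax-normalized, and applied to the causal window
    x[t-k+1 .. t]. Cost is O(T * k * d), i.e. linear in the sequence
    length T, whereas self-attention is O(T^2 * d)."""
    T, d = x.shape
    y = np.zeros_like(x)
    for t in range(T):
        w = softmax(x[t] @ W_kernel)   # (k,) kernel depends only on the current step
        for j in range(k):
            s = t - (k - 1) + j        # causal context position
            if s >= 0:
                y[t] += w[j] * x[s]
    return y

rng = np.random.default_rng(0)
T, d, k = 6, 4, 3
x = rng.standard_normal((T, d))
W_kernel = rng.standard_normal((d, k))  # hypothetical kernel-prediction weights
y = dynamic_conv(x, W_kernel, k)
print(y.shape)  # (6, 4)
```

Because the kernel weights are softmax-normalized, each output is a convex combination of the k most recent inputs; the paper's lightweight convolution is the special case where the kernel is a learned parameter rather than predicted from x[t].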

PUBLICATION RECORD

  • Publication year

    2019

  • Venue

    International Conference on Learning Representations

  • Publication date

    2019-01-29

  • Fields of study

    Computer Science


  • Source metadata

    Semantic Scholar

REFERENCES

62 references

CITED BY

654 citing papers