Recurrent Continuous Translation Models

Nal Kalchbrenner, Phil Blunsom

Published 2013 at the Conference on Empirical Methods in Natural Language Processing

ABSTRACT

We introduce a class of probabilistic continuous translation models called Recurrent Continuous Translation Models that are purely based on continuous representations for words, phrases and sentences and do not rely on alignments or phrasal translation units. The models have a generation and a conditioning aspect. The generation of the translation is modelled with a target Recurrent Language Model, whereas the conditioning on the source sentence is modelled with a Convolutional Sentence Model. Through various experiments, we show first that our models obtain a perplexity with respect to gold translations that is > 43% lower than that of state-of-the-art alignment-based translation models. Secondly, we show that they are remarkably sensitive to the word order, syntax, and meaning of the source sentence despite lacking alignments. Finally we show that they match a state-of-the-art system when rescoring n-best lists of translations.
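The abstract's two-part design, a Convolutional Sentence Model (CSM) that compresses the source sentence into a fixed-size vector, and a target Recurrent Language Model (RLM) whose hidden state is conditioned on that vector, can be illustrated with a minimal numpy forward pass. This is a hedged sketch, not the paper's exact architecture: the real CSM uses a hierarchy of one-dimensional convolutions with varying kernel widths, and all parameter shapes, names, and the toy dimensions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 20, 8  # toy vocabulary size and embedding width (hypothetical)

def csm_encode(E, word_ids, kernel):
    """Toy Convolutional Sentence Model: repeatedly merge adjacent
    embedding columns with a shared kernel until a single fixed-size
    sentence vector remains (a simplification of the paper's hierarchy
    of one-dimensional convolutions)."""
    X = E[:, word_ids]                                    # d x n embedding matrix
    while X.shape[1] > 1:
        pairs = np.concatenate([X[:, :-1], X[:, 1:]], axis=0)  # stack neighbours
        X = np.tanh(kernel @ pairs)                       # convolve and squash
    return X[:, 0]                                        # sentence vector s

def rctm_step(h, prev_id, s, E, W, U, C, O):
    """One step of the target recurrent language model: the recurrence is
    conditioned on the source sentence vector s at every step, so no
    word alignments are needed."""
    h = np.tanh(W @ E[:, prev_id] + U @ h + C @ s)        # conditioned hidden state
    logits = O @ h
    p = np.exp(logits - logits.max())
    p /= p.sum()                                          # softmax over target words
    return h, p

# Hypothetical randomly initialised parameters (training not shown).
E = rng.normal(scale=0.1, size=(d, V))        # word embeddings
kernel = rng.normal(scale=0.1, size=(d, 2 * d))
W = rng.normal(scale=0.1, size=(d, d))
U = rng.normal(scale=0.1, size=(d, d))
C = rng.normal(scale=0.1, size=(d, d))        # conditioning on the source
O = rng.normal(scale=0.1, size=(V, d))        # output projection

s = csm_encode(E, [3, 7, 1, 4], kernel)       # encode a 4-word source sentence
h = np.zeros(d)
h, p = rctm_step(h, 0, s, E, W, U, C, O)      # distribution over first target word
```

At generation time one would sample (or beam-search) from `p`, feed the chosen word id back in as `prev_id`, and repeat; the perplexity results in the abstract come from evaluating these step-wise distributions on gold translations.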

PUBLICATION RECORD

  • Publication year

    2013

  • Venue

    Conference on Empirical Methods in Natural Language Processing

  • Publication date

    2013-10-01

  • Fields of study

    Computer Science


  • Source metadata

    Semantic Scholar


REFERENCES

18 references

CITED BY

1,463 citing papers