Weighting Finite-State Transductions With Neural Context

Pushpendre Rastogi, Ryan Cotterell, Jason Eisner

Published 2016 in the North American Chapter of the Association for Computational Linguistics (NAACL)

ABSTRACT

How should one apply deep learning to tasks such as morphological reinflection, which stochastically edit one string to get another? A recent approach to such sequence-to-sequence tasks is to compress the input string into a vector that is then used to generate the output string, using recurrent neural networks. In contrast, we propose to keep the traditional architecture, which uses a finite-state transducer to score all possible output strings, but to augment the scoring function with the help of recurrent networks. A stack of bidirectional LSTMs reads the input string from left to right and right to left, in order to summarize the input context in which a transducer arc is applied. We combine these learned features with the transducer to define a probability distribution over aligned output strings, in the form of a weighted finite-state automaton. This reduces hand-engineering of features, allows learned features to examine unbounded context in the input string, and still permits exact inference through dynamic programming. We illustrate our method on the tasks of morphological reinflection and lemmatization.
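
To make the architecture concrete, here is a minimal sketch of the idea the abstract describes. It is written in PyTorch as an assumption (the paper does not prescribe a framework), and all sizes, layer counts, and scoring heads are illustrative, not the authors' actual configuration. A stacked bidirectional LSTM produces a context vector at each input position; hypothetical linear heads use those vectors to score the substitution, insertion, and deletion arcs of an edit lattice; and a forward dynamic program then sums the weights of all alignments of input x to output y exactly.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative sizes (assumptions, not the paper's hyperparameters).
VOCAB, EMB, HID = 30, 16, 32

emb = nn.Embedding(VOCAB, EMB)
# "Stack of bidirectional LSTMs": two bidirectional layers.
bilstm = nn.LSTM(EMB, HID, num_layers=2, bidirectional=True, batch_first=True)
# Hypothetical per-arc scoring heads, all reading the BiLSTM context vector:
sub_head = nn.Linear(2 * HID, VOCAB)  # substitute input char i with output char c
ins_head = nn.Linear(2 * HID, VOCAB)  # insert output char c near input position i
del_head = nn.Linear(2 * HID, 1)      # delete input char i

def alignment_logweight(x_ids, y_ids):
    """Log total weight of all alignments of x to y in the edit lattice,
    computed exactly by the forward dynamic program."""
    n, m = len(x_ids), len(y_ids)
    ctx = bilstm(emb(x_ids).unsqueeze(0))[0].squeeze(0)  # (n, 2*HID): one context vector per input position
    sub = sub_head(ctx)               # (n, VOCAB) substitution arc scores
    ins = ins_head(ctx)               # (n, VOCAB) insertion arc scores
    dele = del_head(ctx).squeeze(-1)  # (n,)       deletion arc scores

    neg_inf = torch.tensor(float("-inf"))
    alpha = [[neg_inf] * (m + 1) for _ in range(n + 1)]
    alpha[0][0] = torch.tensor(0.0)
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 and j == 0:
                continue
            terms = []
            if i > 0 and j > 0:  # substitution arc (i-1, j-1) -> (i, j)
                terms.append(alpha[i - 1][j - 1] + sub[i - 1, y_ids[j - 1]])
            if i > 0:            # deletion arc (i-1, j) -> (i, j)
                terms.append(alpha[i - 1][j] + dele[i - 1])
            if j > 0:            # insertion arc (i, j-1) -> (i, j);
                # simplification: clamp the context index when i == n
                terms.append(alpha[i][j - 1] + ins[min(i, n - 1), y_ids[j - 1]])
            alpha[i][j] = torch.logsumexp(torch.stack(terms), dim=0)
    return alpha[n][m]

# Toy usage with made-up character ids:
x = torch.tensor([3, 7, 7, 5])  # an input word
y = torch.tensor([3, 7, 9])     # a candidate output form
print(alignment_logweight(x, y).item())
```

Replacing logsumexp with a max recovers the best-scoring alignment (Viterbi) variant of the same dynamic program; normalizing such total weights across competing output strings is what yields a probability distribution of the kind the abstract describes.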

PUBLICATION RECORD

  • Publication year

    2016

  • Venue

    North American Chapter of the Association for Computational Linguistics

  • Publication date

    2016-06-01

  • Fields of study

    Computer Science


  • Source metadata

    Semantic Scholar
