Morphological Inflection Generation with Hard Monotonic Attention

Roee Aharoni, Yoav Goldberg

Published 2016 in the Annual Meeting of the Association for Computational Linguistics

ABSTRACT

We present a neural model for morphological inflection generation which employs a hard attention mechanism, inspired by the nearly-monotonic alignment commonly found between the characters in a word and the characters in its inflection. We evaluate the model on three previously studied morphological inflection generation datasets and show that it provides state-of-the-art results in various setups compared to previous neural and non-neural approaches. Finally, we present an analysis of the continuous representations learned by both the hard and soft (Bahdanau et al., 2014) attention models for the task, shedding some light on the features such models extract.
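The hard monotonic attention idea described above can be illustrated with a toy control loop: the decoder attends to exactly one input character at a time, and either emits an output character or advances the attention pointer one position to the right. This is a minimal sketch under simplifying assumptions, not the authors' implementation; in the paper the step/write actions are predicted by a trained neural network, whereas here they are supplied as a fixed oracle sequence.

```python
# Toy sketch of a hard monotonic attention decode loop.
# A STEP action moves the attention pointer monotonically rightward over
# the input characters; any other action writes one output character.
# (Hypothetical illustration; action sequences in the real model are
# predicted by a neural decoder conditioned on the attended character.)

STEP = "<step>"

def decode(lemma, actions):
    """Replay a sequence of step/write actions over the input lemma."""
    pointer = 0          # index of the currently attended input character
    output = []
    for action in actions:
        if action == STEP:
            # Advance attention monotonically; never move left.
            pointer = min(pointer + 1, len(lemma) - 1)
        else:
            # Write one output character at the current attention position.
            output.append(action)
    return "".join(output)

# Inflect "walk" -> "walked": copy each character while stepping along
# the input, then append the suffix "ed".
actions = ["w", STEP, "a", STEP, "l", STEP, "k", STEP, "e", "d"]
print(decode("walk", actions))  # -> walked
```

Because the pointer can only move forward, the alignment between input and output is monotonic by construction, which matches the character-level alignments typical of inflection.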

PUBLICATION RECORD

  • Publication year

    2016

  • Venue

    Annual Meeting of the Association for Computational Linguistics

  • Publication date

    2016-11-04

  • Fields of study

    Linguistics, Computer Science


  • Source metadata

    Semantic Scholar


REFERENCES

43 references.

CITED BY

132 citing papers.