Attention-Based Models for Speech Recognition

J. Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, Yoshua Bengio

Published 2015 in Neural Information Processing Systems

ABSTRACT

Recurrent sequence generators conditioned on input data through an attention mechanism have recently shown very good performance on a range of tasks including machine translation, handwriting synthesis [1,2] and image caption generation [3]. We extend the attention mechanism with features needed for speech recognition. We show that while an adaptation of the model used for machine translation in [2] reaches a competitive 18.7% phoneme error rate (PER) on the TIMIT phoneme recognition task, it can only be applied to utterances which are roughly as long as the ones it was trained on. We offer a qualitative explanation of this failure and propose a novel and generic method of adding location-awareness to the attention mechanism to alleviate this issue. The new method yields a model that is robust to long inputs and achieves 18% PER on single utterances and 20% on 10-times longer (repeated) utterances. Finally, we propose a change to the attention mechanism that prevents it from concentrating too much on single frames, which further reduces PER to the 17.6% level.
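
As a rough illustration of the location-aware attention described above, the sketch below (NumPy; the variable names, shapes, and parameterization are our own assumptions, not the paper's notation) convolves the previous alignment with a small bank of 1-D filters and mixes the resulting location features into a content-based score before normalization.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def location_aware_attention(h, s_prev, alpha_prev, W, V, U, F, w, b):
    """One step of hybrid (content + location) attention.

    h          : (L, n) encoder states for L input frames
    s_prev     : (m,)   previous decoder state
    alpha_prev : (L,)   attention weights from the previous output step
    W (m, d), V (n, d), U (k, d), F (k, r), w (d,), b (d,) : learned parameters
    """
    # Location features: convolve the previous alignment with k one-dimensional
    # filters, so the scorer can see where it attended at the last step.
    f = np.stack([np.convolve(alpha_prev, F[i], mode="same")
                  for i in range(F.shape[0])], axis=1)        # (L, k)

    # Score each frame from the decoder state (content), the encoder state
    # (content), and the convolved previous alignment (location).
    e = np.tanh(s_prev @ W + h @ V + f @ U + b) @ w           # (L,)

    # Normalize into attention weights; replacing exp with a sigmoid-based
    # "smoothing" here is one way to keep the weights from peaking too
    # sharply on single frames, as discussed in the abstract.
    alpha = softmax(e)
    c = alpha @ h                                              # context vector, (n,)
    return c, alpha

# Toy usage with hypothetical sizes.
rng = np.random.default_rng(0)
L, n, m, d, k, r = 50, 8, 6, 10, 4, 7
h = rng.standard_normal((L, n))
s_prev = rng.standard_normal(m)
alpha_prev = np.full(L, 1.0 / L)
W, V, U = rng.standard_normal((m, d)), rng.standard_normal((n, d)), rng.standard_normal((k, d))
F, w, b = rng.standard_normal((k, r)), rng.standard_normal(d), rng.standard_normal(d)
c, alpha = location_aware_attention(h, s_prev, alpha_prev, W, V, U, F, w, b)
```

This is only a sketch of the scoring form; the actual model additionally conditions the recurrent generator on the context vector at every output step.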

PUBLICATION RECORD

  • Publication year

    2015

  • Venue

    Neural Information Processing Systems

  • Publication date

    2015-06-24

  • Fields of study

    Mathematics, Computer Science


  • Source metadata

    Semantic Scholar


REFERENCES

34 references

CITED BY

2,733 citing papers