Noisy training for deep neural networks in speech recognition

Shi Yin, Chao Liu, Zhiyong Zhang, Yiye Lin, Dong Wang, Javier Tejedor, T. Zheng, Yinguo Li

Published 2015 in EURASIP Journal on Audio, Speech, and Music Processing

ABSTRACT

Deep neural networks (DNNs) have achieved remarkable success in speech recognition, partially attributed to the flexibility of DNN models in learning complex patterns of speech signals. This flexibility, however, may lead to serious over-fitting and hence severe performance degradation in adverse acoustic conditions, such as those with high levels of ambient noise. We propose a noisy training approach to tackle this problem: by injecting moderate noises into the training data intentionally and randomly, more generalizable DNN models can be learned. This ‘noise injection’ technique, although already known to the neural computation community, has not been studied with DNNs, which involve a highly complex objective function. The experiments presented in this paper confirm that the noisy training approach works well for the DNN model and can provide substantial performance improvement for DNN-based speech recognition.
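The core idea of the abstract — corrupting training utterances with randomly chosen noise at randomly sampled, moderate signal-to-noise ratios — can be sketched as follows. This is a generic illustration of noise injection, not the authors' exact recipe; the function names, the SNR sampling range, and the mini-batch structure are all assumptions.

```python
import numpy as np

def inject_noise(clean, noise, snr_db):
    """Mix a noise signal into a clean utterance at a target SNR (in dB).

    Illustrative sketch only; the scaling rule below is the standard
    power-ratio definition of SNR, not necessarily the paper's setup.
    """
    # Tile or trim the noise so it covers the whole clean signal.
    if len(noise) < len(clean):
        reps = int(np.ceil(len(clean) / len(noise)))
        noise = np.tile(noise, reps)
    noise = noise[:len(clean)]

    # Scale the noise so that 10*log10(P_clean / P_noise_scaled) == snr_db.
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

def noisy_minibatch(batch, noise_pool, rng, snr_range=(5.0, 20.0)):
    """Corrupt each training utterance with a randomly chosen noise
    segment at a randomly sampled SNR -- the 'intentional and random'
    injection described in the abstract (hypothetical SNR range)."""
    out = []
    for clean in batch:
        noise = noise_pool[rng.integers(len(noise_pool))]
        snr = rng.uniform(*snr_range)
        out.append(inject_noise(clean, noise, snr))
    return out
```

In a training loop, `noisy_minibatch` would be applied to each mini-batch before feature extraction, so that the DNN sees a different random corruption of each utterance at every epoch, which is what discourages over-fitting to clean-speech patterns.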

PUBLICATION RECORD

  • Publication year

    2015

  • Venue

    EURASIP Journal on Audio, Speech, and Music Processing

  • Publication date

    2015-01-20

  • Fields of study

    Computer Science, Engineering

  • Source metadata

    Semantic Scholar

REFERENCES

39 references

CITED BY

124 citing papers