Fixed-point performance analysis of recurrent neural networks
Sungho Shin, Kyuyeon Hwang, Wonyong Sung
Published 2015 in IEEE International Conference on Acoustics, Speech, and Signal Processing
ABSTRACT
Recurrent neural networks have shown excellent performance in many applications; however, they require increased complexity in hardware- or software-based implementations. The hardware complexity can be greatly lowered by minimizing the word-length of weights and signals. This work analyzes the fixed-point performance of recurrent neural networks using a retrain-based quantization method. The quantization sensitivity of each layer in RNNs is studied, and overall fixed-point optimization results that minimize the capacity of the weights without sacrificing performance are presented. Language modeling and phoneme recognition examples are used.
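The abstract's core operation, reducing the word-length of weights, can be illustrated with a minimal sketch of uniform signed fixed-point quantization. This is an illustrative assumption, not the authors' exact retrain-based procedure (which additionally retrains the network with quantized weights in the loop); the function name and bit-width choices here are hypothetical.

```python
import numpy as np

def quantize_fixed_point(w, total_bits, frac_bits):
    """Illustrative uniform quantizer: map real-valued weights to a
    signed fixed-point grid with `total_bits` word-length and
    `frac_bits` fractional bits, saturating out-of-range values.
    (Not the paper's exact method; retraining would wrap this in
    the forward pass of further training iterations.)"""
    step = 2.0 ** -frac_bits            # quantization step size
    max_q = 2 ** (total_bits - 1) - 1   # largest positive code
    q = np.round(w / step)              # round to nearest grid level
    q = np.clip(q, -max_q - 1, max_q)   # saturate to the signed range
    return q * step                     # back to real-valued weights

# Example: quantize random weights to 4-bit words with 2 fractional bits,
# so representable values are multiples of 0.25 in [-2.0, 1.75].
rng = np.random.default_rng(0)
w = rng.uniform(-1.5, 1.5, size=(3, 3))
wq = quantize_fixed_point(w, total_bits=4, frac_bits=2)
```

Sensitivity analysis of the kind the paper describes would then compare task performance (perplexity or phoneme error rate) while sweeping `total_bits` per layer.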
PUBLICATION RECORD
- Publication year: 2015
- Venue: IEEE International Conference on Acoustics, Speech, and Signal Processing
- Publication date: 2015-12-04
- Fields of study: Mathematics, Computer Science
- Source metadata: Semantic Scholar