Recurrent neural network-based language modeling for an automatic Russian speech recognition system
2015, Vol. 10, pp. 33–38
Abstract
In this paper, we describe a study of recurrent neural network language models for N-best list rescoring in automatic continuous Russian speech recognition. We experimented with recurrent networks having different numbers of units in the hidden layer, and achieved a relative word error rate reduction of 14% with respect to the baseline 3-gram model.
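The rescoring approach the abstract describes can be sketched as follows: each N-best hypothesis from the first decoding pass carries an acoustic score and a baseline 3-gram LM score; the RNN LM probability is linearly interpolated with the n-gram probability, and the list is re-ranked by the combined score. The field names, the interpolation weight, and the LM scale below are illustrative assumptions, not values from the paper.

```python
import math

def rescore_nbest(nbest, lam=0.5, lm_weight=10.0):
    """Re-rank N-best hypotheses by acoustic + scaled, interpolated LM score.

    nbest: list of dicts with keys 'text', 'am_logp' (acoustic log-prob),
           'ngram_logp' (3-gram LM log-prob), 'rnn_logp' (RNN LM log-prob).
    lam:   interpolation weight given to the RNN LM (assumed value).
    lm_weight: language-model scale factor (assumed value).
    """
    def interpolated_lm(h):
        # Linear interpolation of the two LMs in probability space.
        return math.log(lam * math.exp(h["rnn_logp"])
                        + (1.0 - lam) * math.exp(h["ngram_logp"]))

    def total(h):
        return h["am_logp"] + lm_weight * interpolated_lm(h)

    # Higher combined log-score is better.
    return sorted(nbest, key=total, reverse=True)

# Toy two-hypothesis list: the RNN LM strongly prefers hypothesis B,
# which outweighs its slightly worse acoustic score after rescoring.
nbest = [
    {"text": "hyp A", "am_logp": -100.0, "ngram_logp": -12.0, "rnn_logp": -15.0},
    {"text": "hyp B", "am_logp": -101.0, "ngram_logp": -14.0, "rnn_logp": -9.0},
]
best = rescore_nbest(nbest)[0]["text"]
```

In this toy case the rescoring promotes "hyp B" to the top of the list, illustrating how an RNN LM can overturn the first-pass ranking.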
Related Papers
- Language Identification in Short Utterances Using Long Short-Term Memory (LSTM) Recurrent Neural Networks (2016), 146 citations
- Dynamic Frame Skipping for Fast Speech Recognition in Recurrent Neural Network Based Acoustic Models (2018), 10 citations
- A new recurrent neural network architecture for pattern recognition (1996), 8 citations
- Language Models with RNNs for Rescoring Hypotheses of Russian ASR (2016), 3 citations
- Deep Learning Based Language Modeling for Domain-Specific Speech Recognition (2017), 1 citation