Improved Training of End-to-end Attention Models for Speech Recognition
Abstract
Sequence-to-sequence attention-based models on subword units allow simple open-vocabulary end-to-end speech recognition. In this work, we show that such models can achieve competitive results on the Switchboard 300h and LibriSpeech 1000h tasks. In particular, we report the state-of-the-art word error rates (WER) of 3.54% on the dev-clean and 3.82% on the test-clean evaluation subsets of LibriSpeech. We introduce a new pretraining scheme by starting with a high time reduction factor and lowering it during training, which is crucial both for convergence and final performance. In some experiments, we also use an auxiliary CTC loss function to help the convergence. In addition, we train long short-term memory (LSTM) language models on subword units. By shallow fusion, we report up to 27% relative improvements in WER over the attention baseline without a language model.
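The pretraining scheme can be pictured as a schedule over the encoder's time-reduction factor. Below is a minimal Python sketch, assuming a max-pooling-based encoder downsamples by the returned factor; the function name, stage boundaries, and concrete factor values are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a time-reduction pretraining schedule (assumed API).
# The encoder downsamples the input by the returned factor, e.g. via
# max-pooling between LSTM layers; pretraining starts with a coarse
# (high) factor and lowers it stage by stage to the final factor.
def time_reduction_factor(stage, factors=(32, 16, 8)):
    """Return the total time-reduction factor for a pretraining stage.

    `factors` lists the schedule from coarse to fine; the last entry is
    used for the rest of training. These values are placeholders.
    """
    return factors[min(stage, len(factors) - 1)]

for stage in range(5):
    print(stage, time_reduction_factor(stage))  # 32, 16, 8, 8, 8
```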
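Shallow fusion itself is a log-linear combination of the attention decoder's score and the external LM's score at each beam-search step. A minimal sketch follows, assuming per-step log-probabilities over the subword vocabulary are available as dictionaries; the token set, scores, and LM weight are illustrative, not values from the paper.

```python
# Minimal sketch of shallow fusion at one beam-search step (illustrative).
# `am_logp` / `lm_logp` map subword units to log-probabilities from the
# attention decoder and the LSTM LM; `lm_weight` is a tuned scalar.
def fuse_step(am_logp, lm_logp, lm_weight=0.3):
    # score(y) = log p_AM(y | x, y_prev) + lm_weight * log p_LM(y | y_prev)
    return {tok: am_logp[tok] + lm_weight * lm_logp[tok] for tok in am_logp}

am_logp = {"_the": -0.4, "_a": -1.2, "re": -2.1}  # placeholder values
lm_logp = {"_the": -0.7, "_a": -0.9, "re": -2.5}
fused = fuse_step(am_logp, lm_logp)
best = max(fused, key=fused.get)  # "_the" under these scores
```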