Very deep convolutional networks for end-to-end speech recognition
2017, pp. 4845–4849
Top 1% of 2017 papers
Abstract
Sequence-to-sequence models have shown success in end-to-end speech recognition. However, these models have only used shallow acoustic encoder networks. In our work, we successively train very deep convolutional networks to add more expressive power and better generalization to end-to-end ASR models. We apply network-in-network principles, batch normalization, residual connections, and convolutional LSTMs to build very deep recurrent and convolutional structures. Our models exploit the spectral structure in the feature space and add computational depth without overfitting. On the WSJ ASR task, a 15-layer deep network achieves a 10.5% word error rate without any dictionary or language model.
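The abstract names residual connections and batch normalization as the ingredients that let the encoder grow deep. A minimal NumPy sketch of one such residual convolutional block applied to a spectrogram patch is below; it is illustrative only, and the kernel size, normalization granularity, and layer ordering are assumptions, not the paper's exact architecture:

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive 'same'-padded 2-D convolution of a (time, freq) spectrogram patch."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def batch_norm(x, eps=1e-5):
    """Normalize activations to zero mean and unit variance (toy, no learned scale/shift)."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def residual_block(x, kernel):
    """conv -> batch norm -> ReLU, wrapped in an identity skip connection."""
    y = batch_norm(conv2d_same(x, kernel))
    y = np.maximum(y, 0.0)  # ReLU
    return x + y            # residual (identity skip) connection

# Stacking many such blocks adds depth while the skip path keeps gradients flowing.
x = np.arange(32.0).reshape(4, 8)       # toy 4x8 spectrogram patch
out = residual_block(x, np.full((3, 3), 0.01))
```

The identity skip is what makes depth cheap: each block only has to learn a residual correction, which is why stacking many of them does not destabilize training the way a plain deep stack would.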
Related Papers
- → A Spelling Correction Model for End-to-end Speech Recognition (2019), 140 citations
- → Speech recognition experiments using multi-span statistical language models (1999), 6 citations
- → Regularizing Autoencoder-Based Matrix Completion Models via Manifold Learning (2018), 2 citations
- → Deep Learning Based Language Modeling for Domain-Specific Speech Recognition (2017), 1 citation