Learning the speech front-end with raw waveform CLDNNs
Top 1% of 2015 papers
Abstract
Learning an acoustic model directly from the raw waveform has been an active area of research. However, waveform-based models have not yet matched the performance of log-mel trained neural networks. We will show that raw waveform features match the performance of log-mel filterbank energies when used with a state-of-the-art CLDNN acoustic model trained on over 2,000 hours of speech. Specifically, we will show the benefit of the CLDNN, namely the time convolution layer in reducing temporal variations, the frequency convolution layer for preserving locality and reducing frequency variations, as well as the LSTM layers for temporal modeling. In addition, by stacking raw waveform features with log-mel features, we achieve a 3% relative reduction in word error rate.
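To make the learned front-end concrete, here is a minimal NumPy sketch of a time-convolution layer operating on raw samples: per frame, each filter is convolved with the waveform, rectified, max-pooled over time, and log-compressed, yielding one filterbank-like energy per filter. The filter weights, frame size, and shift below are illustrative placeholders (random weights stand in for learned ones), not the paper's exact configuration.

```python
import numpy as np

def learned_frontend(wave, filters, frame_len=400, frame_shift=160):
    """Sketch of a raw-waveform time-convolution front-end.

    For each 25 ms frame (400 samples at 16 kHz, shifted by 10 ms),
    convolve every filter with the raw samples, apply a ReLU,
    max-pool over time, and log-compress. The result is a
    (num_frames, num_filters) matrix of filterbank-like energies.
    """
    num_filters, _ = filters.shape
    num_frames = 1 + (len(wave) - frame_len) // frame_shift
    feats = np.empty((num_frames, num_filters))
    for t in range(num_frames):
        frame = wave[t * frame_shift : t * frame_shift + frame_len]
        for f in range(num_filters):
            conv = np.convolve(frame, filters[f], mode="valid")   # time convolution
            pooled = np.maximum(conv, 0.0).max()                  # ReLU + max-pool over time
            feats[t, f] = np.log(pooled + 1e-6)                   # log compression
    return feats

rng = np.random.default_rng(0)
wave = rng.standard_normal(16000)                 # 1 s of synthetic 16 kHz audio
filters = rng.standard_normal((40, 50)) * 0.1     # stand-in for learned filter weights
feats = learned_frontend(wave, filters)
print(feats.shape)  # (98, 40): 98 frames, 40 learned "filterbank" channels
```

In the full model these frame-level features would then feed the frequency-convolution, LSTM, and DNN layers of the CLDNN; here only the front-end is sketched.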