Personalized speech recognition on mobile devices
Top 1% of 2016 papers (by citations)
Abstract
We describe a large vocabulary speech recognition system that is accurate, has low latency, and yet has a small enough memory and computational footprint to run faster than real-time on a Nexus 5 Android smartphone. We employ a quantized Long Short-Term Memory (LSTM) acoustic model trained with connectionist temporal classification (CTC) to directly predict phoneme targets, and further reduce its memory footprint using an SVD-based compression scheme. Additionally, we minimize our memory footprint by using a single language model for both dictation and voice command domains, constructed using Bayesian interpolation. Finally, in order to properly handle device-specific information, such as proper names and other context-dependent information, we inject vocabulary items into the decoder graph and bias the language model on-the-fly. Our system achieves 13.5% word error rate on an open-ended dictation task, running with a median speed that is seven times faster than real-time.
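The SVD-based compression scheme mentioned in the abstract can be sketched as follows. This is a generic low-rank factorization illustration in NumPy, not the paper's actual implementation; the matrix dimensions and rank are arbitrary assumptions chosen to show the parameter savings:

```python
import numpy as np

def svd_compress(W, rank):
    """Factor a weight matrix W (m x n) into A (m x rank) and B (rank x n),
    so that W is approximated by A @ B with far fewer parameters."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]  # absorb singular values into the left factor
    B = Vt[:rank, :]
    return A, B

# Hypothetical 512x512 recurrent weight matrix compressed to rank 64.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
A, B = svd_compress(W, rank=64)

original_params = W.size            # 512 * 512 = 262144
compressed_params = A.size + B.size  # 512*64 + 64*512 = 65536 (4x smaller)
```

Replacing a dense layer's weight matrix with two such factors trades a small amount of accuracy for a large reduction in memory and multiply-accumulate cost, which is the trade-off the paper exploits to fit on-device.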
Related Papers
- → HMM-GMM based Amazigh speech recognition system (2020), 2 cited
- → Comparing computation in Gaussian mixture and neural network based large-vocabulary speech recognition (2013), 2 cited
- → Performance of hybrid MMI-connectionist/HMM systems on the WSJ speech database (2002), 1 cited
- → Text Independent Speaker Verification Using Dominant State Information of HMM-UBM (2015)