Efficient WFST-Based One-Pass Decoding With On-The-Fly Hypothesis Rescoring in Extremely Large Vocabulary Continuous Speech Recognition
Abstract
This paper proposes a novel one-pass search algorithm with on-the-fly composition of weighted finite-state transducers (WFSTs) for large-vocabulary continuous-speech recognition. In the standard search method with on-the-fly composition, two or more WFSTs are composed during decoding, and a Viterbi search is performed over the composed search space. With the proposed method, the Viterbi search is performed over only the first of the two WFSTs; the second WFST is used solely to rescore the hypotheses generated during the search. Since this rescoring is very efficient, the total computation required by the new method is almost the same as when using only the first WFST. On a 65k-word-vocabulary spontaneous lecture speech transcription task, our proposed method significantly outperformed the standard search method. Furthermore, our method was faster than decoding with a single fully composed and optimized WFST, while using only 38% of the memory that decoding with the single WFST requires. Finally, we achieved high-accuracy one-pass real-time speech recognition with an extremely large vocabulary of 1.8 million words.
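The idea described in the abstract can be illustrated with a toy sketch: hypotheses are expanded over the first WFST only, and each hypothesis carries along a state in the second WFST, which is consulted just to rescore each emitted word rather than being composed into the search space. This is a minimal illustration under assumed dict-based transducer representations and a tropical (min-cost) semiring, not the paper's actual implementation.

```python
# Minimal sketch of on-the-fly hypothesis rescoring (assumption: toy
# dict-based WFSTs, not the paper's actual data structures).
# wfst1: {state: {input_label: (next_state, weight, output_word_or_None)}}
# wfst2: {state: {word: (next_state, weight)}}  -- the rescoring transducer
# Weights are costs in the tropical semiring: lower is better.

def on_the_fly_rescore(wfst1, wfst2, start1, start2, inputs, beam=3):
    """Beam-pruned Viterbi search over wfst1; wfst2 only rescores emitted
    words. Returns the best total cost after consuming `inputs`."""
    hyps = {(start1, start2): 0.0}          # (state1, state2) -> cost
    for label in inputs:
        new_hyps = {}
        for (s1, s2), cost in hyps.items():
            arc = wfst1.get(s1, {}).get(label)
            if arc is None:
                continue                    # no matching transition in wfst1
            n1, w1, word = arc
            n2, extra = s2, 0.0
            if word is not None:            # rescore the emitted word
                arc2 = wfst2.get(s2, {}).get(word)
                if arc2 is None:
                    continue                # word rejected by rescoring WFST
                n2, extra = arc2
            key, total = (n1, n2), cost + w1 + extra
            if total < new_hyps.get(key, float("inf")):
                new_hyps[key] = total       # keep best cost per state pair
        # beam pruning: keep only the `beam` lowest-cost hypotheses
        hyps = dict(sorted(new_hyps.items(), key=lambda kv: kv[1])[:beam])
    return min(hyps.values()) if hyps else float("inf")


# Usage on a two-word toy example (all names and weights are illustrative):
wfst1 = {0: {"a": (1, 1.0, "hello")}, 1: {"b": (2, 1.0, "world")}}
wfst2 = {0: {"hello": (1, 0.5)}, 1: {"world": (2, 0.2)}}
best = on_the_fly_rescore(wfst1, wfst2, 0, 0, ["a", "b"])  # → 2.7
```

Because the search space is that of wfst1 alone, the number of expanded hypotheses matches single-WFST decoding; the rescoring step adds only one lookup per emitted word, which is the source of the efficiency claim in the abstract.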