Discriminative Training for Large-Vocabulary Speech Recognition Using Minimum Classification Error
Top 1% of 2006 papers by citations
Abstract
The minimum classification error (MCE) framework for discriminative training is a simple and general formalism for directly optimizing recognition accuracy in pattern recognition problems. The framework applies directly to the optimization of hidden Markov models (HMMs) used for speech recognition. However, few if any studies have reported results for the application of MCE training to large-vocabulary, continuous-speech recognition tasks. This article reports significant gains in recognition performance and model compactness resulting from MCE-based discriminative training of HMMs, in the context of three challenging large-vocabulary (up to 100,000-word) speech recognition tasks: the Corpus of Spontaneous Japanese lecture speech transcription task, a telephone-based name recognition task, and the MIT Jupiter telephone-based conversational weather information task. On these tasks, starting from maximum likelihood (ML) baselines, MCE training yielded relative reductions in word error ranging from 7% to 20%. Furthermore, this paper evaluates different methods for optimizing the MCE criterion function, as well as the use of precomputed recognition lattices to speed up training. An overview of the MCE framework is given, with an emphasis on practical implementation issues.
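To make the criterion concrete, the following is a minimal illustrative sketch (not the paper's implementation) of the standard smoothed MCE loss for a single utterance: a misclassification measure compares the correct string's discriminant score against a soft-max over competitor scores, and a sigmoid turns that measure into a differentiable 0/1-loss surrogate. The function name, the score list, and the default `eta`/`gamma` smoothing constants are assumptions for illustration.

```python
import math

def mce_loss(scores, correct_idx, eta=1.0, gamma=1.0):
    """Smoothed MCE loss for one utterance (illustrative sketch).

    scores: discriminant (e.g. log-likelihood) score g_j for each
            candidate word string, as produced by the recognizer.
    correct_idx: index of the correct transcription in `scores`.
    eta: soft-max smoothness over competitors; gamma: sigmoid slope.
    """
    g_correct = scores[correct_idx]
    competitors = [g for i, g in enumerate(scores) if i != correct_idx]
    # Soft-max of competitor scores via a numerically stable log-sum-exp.
    m = max(eta * g for g in competitors)
    lse = m + math.log(sum(math.exp(eta * g - m) for g in competitors))
    anti = (lse - math.log(len(competitors))) / eta
    # Misclassification measure: positive when competitors beat the truth.
    d = -g_correct + anti
    # Sigmoid smoothing yields a differentiable surrogate for the 0/1 loss.
    return 1.0 / (1.0 + math.exp(-gamma * d))
```

Because the loss is differentiable in the scores (and hence in the HMM parameters), it can be minimized with gradient-based methods such as generalized probabilistic descent or, as evaluated in the paper, batch optimizers; a loss near 0 indicates a confidently correct string, near 1 a confident error.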
Related Papers
- → Discriminative training of hierarchical acoustic models for large vocabulary continuous speech recognition (2009), 14 citations
- → HMM-GMM based Amazigh speech recognition system (2020), 1 citation
- → Performance of hybrid MMI-connectionist/HMM systems on the WSJ speech database (2002), 1 citation
- → Discriminative Acoustic Event Recognition In Multimedia Recordings (2011)
- → Text Independent Speaker Verification Using Dominant State Information of HMM-UBM (2015)