Fast speaker adaptation of hybrid NN/HMM model for speech recognition based on discriminative learning of speaker code
Abstract
In this paper, we propose a new fast speaker adaptation method for the hybrid NN/HMM speech recognition model. The method relies on jointly learning a large generic adaptation neural network shared across all speakers together with multiple small speaker codes (one per speaker). The joint training procedure uses all training data along with speaker labels to update the adaptation NN weights and the speaker codes via the standard back-propagation algorithm. The learned adaptation NN is thereby capable of transforming each speaker's features into a generic speaker-independent feature space when given a small speaker code. Adapting to a new speaker reduces to learning a new speaker code with the same back-propagation algorithm, without changing any NN weights. In this method, a separate speaker code is learned for each speaker, while the large adaptation NN is learned from the whole training set. The main advantage of this method is that the speaker codes are very small, so the hybrid NN/HMM model can be adapted to each speaker very quickly from only a small amount of adaptation data (i.e., just a few utterances). Experimental results on TIMIT show over 10% relative reduction in phone error rate using only seven utterances for adaptation.
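The core adaptation step described above can be sketched numerically. The toy example below (an assumption for illustration, not the paper's actual architecture) uses a single frozen linear adaptation layer: the network weights `W`, assumed already trained jointly on all speakers, stay fixed, and only a small speaker code vector is updated by gradient descent on a squared-error loss over a few adaptation utterances.

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, code_dim = 20, 5  # speaker code is much smaller than the network

# Frozen adaptation-layer weights (stand-in for the jointly trained NN).
W_feat = rng.normal(scale=0.1, size=(feat_dim, feat_dim))
W_code = rng.normal(scale=0.1, size=(feat_dim, code_dim))

# Simulated adaptation data from a new speaker: the "true" code is
# unknown to the adaptation procedure and only used to build targets.
X = rng.normal(size=(50, feat_dim))          # speaker-dependent features
true_code = rng.normal(size=code_dim)
Y = X @ W_feat.T + true_code @ W_code.T      # speaker-independent targets

# Fast adaptation: learn ONLY the speaker code; all weights stay frozen.
code = np.zeros(code_dim)
lr = 0.5
for _ in range(1000):
    pred = X @ W_feat.T + code @ W_code.T
    # Gradient of the mean squared error w.r.t. the speaker code.
    grad = 2.0 * (pred - Y).mean(axis=0) @ W_code
    code -= lr * grad
```

Because only `code_dim` parameters are optimized, very little adaptation data is needed; in the real system the gradient flows through the full nonlinear adaptation NN via back-propagation rather than this single linear layer.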