Speaker adaptation through speaker-specific compensation
Abstract
This paper describes a new speaker adaptation strategy that we term speaker-specific compensation. The basic idea is to transform the speech of a speaker so that it becomes recognizable by a speaker-dependent classifier built for another speaker. The compensating filter is learnt as a cepstral vector from labelled speech samples of the speaker. Drawing on ideas from combining multiple pattern classifiers, we present a new speaker-independent speech recognition system that uses a few speaker-dependent classifiers together with a bank of cepstral compensating vectors learnt for a large number of other speakers. Each speaker-dependent classifier is trained on the speech samples of only one speaker and is never retrained or adapted thereafter. We present results illustrating the effectiveness of this speaker-specific compensation idea.
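The compensation scheme described above can be sketched minimally as follows. Since adding a vector in the cepstral domain corresponds to applying a linear filter in the log-spectral domain, one simple realization (an assumption for illustration, not the paper's exact estimation procedure) is to learn the compensating vector as the mean frame-wise difference between the reference speaker's cepstra and the new speaker's cepstra over aligned labelled frames; the function names `learn_compensation_vector` and `compensate` are hypothetical.

```python
import numpy as np

def learn_compensation_vector(source_cepstra, reference_cepstra):
    """Estimate a fixed cepstral compensation vector as the mean
    frame-wise difference between the reference speaker's cepstra
    and the source speaker's cepstra.

    Both inputs are (num_frames, num_coeffs) arrays of labelled,
    frame-aligned cepstral vectors (alignment is assumed done)."""
    return np.mean(reference_cepstra - source_cepstra, axis=0)

def compensate(cepstra, comp_vector):
    """Shift a speaker's cepstral frames toward the reference speaker,
    so they can be fed to that speaker's speaker-dependent classifier."""
    return cepstra + comp_vector
```

At recognition time, an unknown speaker's frames would be passed through each stored compensating vector and scored against the corresponding speaker-dependent classifier, with the classifier outputs then combined.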