Within-class covariance normalization for SVM-based speaker recognition
Abstract
This paper extends the within-class covariance normalization (WCCN) technique described in [1, 2] for training generalized linear kernels. We describe a practical procedure for applying WCCN to an SVM-based speaker recognition system where the input feature vectors reside in a high-dimensional space. Our approach involves using principal component analysis (PCA) to split the original feature space into two subspaces: a low-dimensional “PCA space” and a high-dimensional “PCA-complement space.” After performing WCCN in the PCA space, we concatenate the resulting feature vectors with a weighted version of their PCA-complements. When applied to a state-of-the-art MLLR-SVM speaker recognition system, this approach achieves improvements of up to 22% in EER and 28% in minimum decision cost function (DCF) over our previous baseline. We also achieve substantial improvements over an MLLR-SVM system that performs WCCN in the PCA space but discards the PCA-complement.
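The procedure described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name `wccn_pca_features`, the complement weight `alpha`, and the small ridge term added before inversion are assumptions introduced here for clarity. It projects training vectors onto the top-k principal directions ("PCA space") and the remaining directions ("PCA-complement space"), whitens the PCA-space coordinates by the inverse within-class covariance (the WCCN feature mapping, via a Cholesky factor), and concatenates the result with a weighted copy of the complement coordinates.

```python
import numpy as np

def wccn_pca_features(X, labels, k, alpha=0.1):
    """Illustrative WCCN-in-PCA-space feature mapping (names/params assumed).

    X:      (n, d) training feature vectors, one per row
    labels: (n,) class (speaker) label for each row
    k:      dimension of the low-dimensional PCA space
    alpha:  assumed weight on the PCA-complement coordinates
    """
    # 1) PCA split: top-k principal directions vs. their orthogonal complement.
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=True)
    P, Pc = Vt[:k].T, Vt[k:].T          # (d, k) and (d, d-k) bases
    Z = Xc @ P                          # PCA-space coordinates
    Zcomp = Xc @ Pc                     # PCA-complement coordinates

    # 2) WCCN in the PCA space: average the per-class (centered) covariances.
    classes = np.unique(labels)
    Cw = np.zeros((k, k))
    for c in classes:
        Cw += np.cov(Z[labels == c].T, bias=True)
    Cw /= len(classes)

    # Feature map B with B @ B.T = inv(Cw): linear-kernel dot products of the
    # mapped features then realize the x1' inv(Cw) x2 generalized kernel.
    # The small ridge term is an assumed numerical safeguard.
    B = np.linalg.cholesky(np.linalg.inv(Cw + 1e-6 * np.eye(k)))
    Zw = Z @ B

    # 3) Concatenate with a weighted version of the PCA-complement.
    return np.hstack([Zw, alpha * Zcomp])
```

After this mapping, the whitened block has (approximately) identity within-class covariance, so a standard linear-kernel SVM trained on the concatenated vectors realizes the generalized linear kernel in the PCA space while retaining (down-weighted) information from the complement.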