Kernel-Based Feature Extraction with a Speech Technology Application
Top 10% of 2004 papers by citations
Abstract
Kernel-based nonlinear feature extraction and classification algorithms are a popular new research direction in machine learning. This paper examines their applicability to the classification of phonemes in a phonological awareness drilling software package. We first give a concise overview of nonlinear feature extraction methods such as kernel principal component analysis (KPCA), kernel independent component analysis (KICA), kernel linear discriminant analysis (KLDA), and kernel springy discriminant analysis (KSDA). The overview treats all the methods in a unified framework, regardless of whether they are unsupervised or supervised. The effect of the transformations on subsequent classification is tested in combination with learning algorithms such as Gaussian mixture modeling (GMM), artificial neural networks (ANNs), projection pursuit learning (PPL), decision tree-based classification (C4.5), and support vector machines (SVMs). In most cases we found that the transformations had a beneficial effect on classification performance, and the nonlinear supervised algorithms yielded the best results.
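The paper itself gives the formal treatment of these methods; as a rough illustration of the general idea, the sketch below implements one of the listed transformations, KPCA, from scratch with numpy. It is a minimal sketch under common assumptions (an RBF kernel, centering of the kernel matrix in feature space, eigenvector scaling by the square root of the eigenvalues) and is not the paper's exact formulation; the function name `kpca` and all parameter values are illustrative only. The resulting nonlinear features would then be fed to a classifier such as a GMM, ANN, or SVM.

```python
import numpy as np

def kpca(X, n_components=2, gamma=1.0):
    """Project data onto the top principal axes in an RBF kernel feature space.

    Illustrative sketch only; gamma and n_components are arbitrary choices.
    """
    # Pairwise squared Euclidean distances between all samples
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * d2)                      # RBF (Gaussian) kernel matrix

    # Center the kernel matrix, i.e. center the data in feature space
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one

    # Eigendecomposition; eigh returns eigenvalues in ascending order
    w, v = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]     # top components first
    w, v = w[idx], v[:, idx]

    # Scale eigenvectors so the projection is well-conditioned
    alphas = v / np.sqrt(np.maximum(w, 1e-12))
    return Kc @ alphas                           # one row of features per sample

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))                     # 40 samples, 5 raw features
Z = kpca(X, n_components=3)
print(Z.shape)  # (40, 3)
```

A supervised variant such as KLDA or KSDA would replace the eigenproblem on the centered kernel matrix with one that also uses class labels; the kernel-trick machinery (compute K, center, solve an eigenproblem, project) stays the same.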