Kernel Optimization in Discriminant Analysis
Abstract
Kernel mapping is one of the most widely used approaches to intrinsically derive nonlinear classifiers. The idea is to use a kernel function that maps the original, nonlinearly separable problem to a space of intrinsically higher dimensionality where the classes become linearly separable. A major problem in the design of kernel methods is finding the kernel parameters that make the problem linear in the mapped representation. This paper derives the first criterion that specifically aims to find a kernel representation in which the Bayes classifier becomes linear. We illustrate how this result can be successfully applied to several kernel discriminant analysis algorithms. Experimental results, on a large number of databases and classifiers, demonstrate the utility of the proposed approach. The paper also shows, theoretically and experimentally, that a kernel version of Subclass Discriminant Analysis yields the highest recognition rates.
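To make the setup concrete, the sketch below implements a standard two-class kernel Fisher discriminant with an RBF kernel and selects the kernel parameter gamma by held-out accuracy. This is a generic stand-in, not the paper's method: the criterion proposed in the paper (choosing the kernel so that the Bayes classifier becomes linear in the mapped space) is not reproduced here, and all function names, the toy data, and the regularization constant are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian RBF kernel matrix between the rows of A and the rows of B."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def fit_kfd(X, y, gamma, reg=1e-3):
    """Two-class kernel Fisher discriminant: solve for the coefficient
    vector alpha of the discriminant direction, expressed in kernel space."""
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    means, N = [], np.zeros((n, n))
    for c in np.unique(y):
        Kc = K[:, y == c]                      # kernel columns of class c
        nc = Kc.shape[1]
        means.append(Kc.mean(axis=1))          # class mean in alpha space
        # within-class scatter, written entirely with kernel evaluations
        N += Kc @ (np.eye(nc) - np.ones((nc, nc)) / nc) @ Kc.T
    return np.linalg.solve(N + reg * np.eye(n), means[0] - means[1])

def predict_kfd(X_train, y, alpha, gamma, X_test):
    """Project onto the discriminant and assign the nearest projected class mean."""
    train_proj = rbf_kernel(X_train, X_train, gamma) @ alpha
    test_proj = rbf_kernel(X_test, X_train, gamma) @ alpha
    classes = np.unique(y)
    mu = np.array([train_proj[y == c].mean() for c in classes])
    return classes[np.argmin(np.abs(test_proj[:, None] - mu[None, :]), axis=1)]

# Toy nonlinearly separable data: an inner cluster surrounded by a ring.
rng = np.random.default_rng(0)
r = np.r_[rng.uniform(0, 1, 100), rng.uniform(2, 3, 100)]
t = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[r * np.cos(t), r * np.sin(t)]
y = np.r_[np.zeros(100, int), np.ones(100, int)]
tr, te = np.arange(0, 200, 2), np.arange(1, 200, 2)   # even/odd split

# Generic parameter selection by held-out accuracy (a stand-in for the
# paper's Bayes-linearity criterion, which is not reproduced here).
best = max(
    (np.mean(predict_kfd(X[tr], y[tr], fit_kfd(X[tr], y[tr], g), g, X[te]) == y[te]), g)
    for g in [0.01, 0.1, 1.0, 10.0]
)
print(f"best held-out accuracy {best[0]:.2f} at gamma={best[1]}")
```

The point of the example is only to show where the kernel parameter enters: every quantity in the discriminant is a function of the kernel matrix, so the choice of gamma determines whether the mapped classes are linearly separable at all.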
Related Papers
- Simultaneous Twin Kernel Learning Using Polynomial Transformations for Structured Prediction (2014)
- FKNDT: A Flexible Kernel by Negotiating Between Data-dependent Kernel Learning and Task-dependent Kernel Learning (2020)
- A new compositional kernel method for multiple kernels (2010)
- An element-wise kernel learning framework (2022)