Fast SVM training algorithm with decomposition on very large data sets
Top 1% of 2005 papers by citations
Abstract
Training a support vector machine on a very large data set with thousands of classes is a challenging problem. This paper proposes an efficient algorithm to solve it. The key idea is to introduce a parallel optimization step that quickly removes most of the non-support vectors: block diagonal matrices are used to approximate the original kernel matrix, so that the original problem can be split into hundreds of subproblems that are solved far more efficiently. In addition, effective strategies such as kernel caching and efficient computation of the kernel matrix are integrated to speed up the training process. Our analysis shows that the time complexity of the proposed algorithm grows linearly with both the number of classes and the size of the data set. The experiments investigate many appealing properties of the proposed algorithm and show that it scales much better than Libsvm, SVMlight, and SVMTorch. Moreover, good generalization performance is also achieved on several large databases.
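The block-diagonal decomposition described above can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: it uses a Pegasos-style linear SVM trained by subgradient descent as a stand-in for the paper's kernel QP solver, and all function names (`train_linear_svm`, `approximate_support_vectors`, `block_diagonal_train`) and parameters are hypothetical. The structure, however, follows the abstract: partition the data into blocks (the block-diagonal approximation of the kernel matrix), solve each block independently, discard points that are clearly not support vectors, and run a final pass on the small surviving set.

```python
import random


def train_linear_svm(data, lam=0.5, epochs=200):
    """Pegasos-style subgradient descent on the hinge loss.

    data: list of (x, y) pairs, x a list of floats, y in {-1, +1}.
    Stands in for the QP solver applied to each subproblem.
    """
    dim = len(data[0][0])
    w = [0.0] * dim
    b = 0.0
    t = 0
    for _ in range(epochs):
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            decay = 1.0 - eta * lam        # shrinkage from the regularizer
            if margin < 1.0:               # hinge-loss violator
                w = [decay * wi + eta * y * xi for wi, xi in zip(w, x)]
                b += eta * y
            else:
                w = [decay * wi for wi in w]
    return w, b


def approximate_support_vectors(data, w, b, tol=1.0):
    """Keep points on or inside the margin; the rest are discarded
    as likely non-support vectors."""
    return [(x, y) for x, y in data
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= tol]


def block_diagonal_train(data, n_blocks=4):
    """Split the data into blocks (the block-diagonal approximation),
    solve each block independently, pool the surviving candidate
    support vectors, and run one final pass on the reduced set."""
    data = list(data)
    random.shuffle(data)
    blocks = [data[i::n_blocks] for i in range(n_blocks)]
    survivors = []
    for block in blocks:
        w, b = train_linear_svm(block)
        survivors.extend(approximate_support_vectors(block, w, b))
    if not survivors:                      # safety fallback
        survivors = data
    return train_linear_svm(survivors), survivors
```

The point of the two-stage structure is that the final optimization runs on the pooled support-vector candidates only, which is typically a small fraction of the original set; this is what makes the overall cost scale roughly linearly with the data size.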