SVM: Reduction of Learning Time
2010, International Journal of Computer Applications
https://doi.org/10.5120/755-992…
7 pages
Abstract
Training a support vector machine (SVM) leads to a quadratic optimization problem with bound constraints and one linear equality constraint. Although this type of problem is well understood, many issues must be considered when designing an SVM learner. In particular, for large learning tasks with many training examples, off-the-shelf optimization techniques for general quadratic programs quickly become intractable in their memory and time requirements. Here we propose an algorithm that aims at reducing the learning time. It is based on the decomposition method proposed by Osuna for optimizing SVMs: it divides the original optimization problem into subproblems that the machine can handle within its CPU time and memory budget. In practice, the obtained solution is more parsimonious in learning time than that found by Osuna's approach, while offering similar classification performance.
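For reference, the quadratic program referred to above is the standard soft-margin SVM dual; given \(\ell\) training pairs \((x_i, y_i)\) with \(y_i \in \{-1, +1\}\), a kernel \(K\), and a regularization parameter \(C\), the learner solves

\[
\max_{\alpha \in \mathbb{R}^{\ell}} \;\; \sum_{i=1}^{\ell} \alpha_i \;-\; \frac{1}{2} \sum_{i=1}^{\ell} \sum_{j=1}^{\ell} \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j)
\qquad \text{s.t.} \quad \sum_{i=1}^{\ell} \alpha_i y_i = 0, \quad 0 \le \alpha_i \le C,
\]

where \(0 \le \alpha_i \le C\) are the bound constraints and \(\sum_i \alpha_i y_i = 0\) is the single linear equality constraint mentioned in the abstract. Decomposition methods in Osuna's style keep most of \(\alpha\) fixed and re-optimize only a small working set at each iteration, so the full kernel matrix never needs to be held in memory.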
Related papers
Advances in Parallel Computing, 2004
We consider parallel decomposition techniques for solving the large quadratic programming (QP) problems arising in training support vector machines. A recent technique is improved by introducing an efficient solver for the inner QP subproblems and a preprocessing step useful for hot-starting the decomposition strategy. The effectiveness of the proposed improvements is evaluated by solving large-scale benchmark problems on different parallel architectures.
2008 IEEE International Conference on Systems, Man and Cybernetics, 2008
Maximizing classification performance on the training data is a typical procedure in training a classifier. It is well known that training a Support Vector Machine (SVM) requires the solution of an enormous quadratic programming (QP) optimization problem, and the resulting training burden can be addressed using Sequential Minimal Optimization (SMO). This paper investigates the performance of the SMO solver in terms of CPU time, number of support vectors, and decision boundaries when applied to 2-dimensional datasets. The chunking algorithm is then employed for comparison. Initial results demonstrate that the SMO algorithm can enhance the performance of training. Both algorithms produce similar decision boundaries, and the classification rates achieved by both solvers are excellent.
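To illustrate how SMO sidesteps a general QP solver, the sketch below implements the simplified teaching variant of SMO (random choice of the second multiplier instead of Platt's full heuristics); the function name, parameters, and stopping rule are illustrative choices, not the exact procedure benchmarked in the paper.

    import numpy as np

    def simplified_smo(K, y, C=1.0, tol=1e-3, max_passes=10):
        """Minimal SMO sketch (simplified teaching variant, not Platt's
        heuristics). K: precomputed kernel matrix, y: labels in {-1, +1}."""
        n = len(y)
        alpha, b = np.zeros(n), 0.0

        def f(i):  # decision value for training point i
            return np.sum(alpha * y * K[:, i]) + b

        passes = 0
        while passes < max_passes:
            changed = 0
            for i in range(n):
                E_i = f(i) - y[i]
                # Proceed only if alpha_i violates the KKT conditions
                if (y[i] * E_i < -tol and alpha[i] < C) or \
                   (y[i] * E_i > tol and alpha[i] > 0):
                    j = np.random.choice([t for t in range(n) if t != i])
                    E_j = f(j) - y[j]
                    a_i_old, a_j_old = alpha[i], alpha[j]
                    # Box for alpha_j keeping 0 <= alpha <= C and sum_i alpha_i y_i = 0
                    if y[i] != y[j]:
                        L, H = max(0, a_j_old - a_i_old), min(C, C + a_j_old - a_i_old)
                    else:
                        L, H = max(0, a_i_old + a_j_old - C), min(C, a_i_old + a_j_old)
                    eta = 2 * K[i, j] - K[i, i] - K[j, j]
                    if L == H or eta >= 0:
                        continue
                    # Analytic optimum for alpha_j along the pair, clipped to the box
                    alpha[j] = np.clip(a_j_old - y[j] * (E_i - E_j) / eta, L, H)
                    if abs(alpha[j] - a_j_old) < 1e-5:
                        continue
                    # Adjust alpha_i so the equality constraint stays satisfied
                    alpha[i] += y[i] * y[j] * (a_j_old - alpha[j])
                    # Bias update derived from the KKT conditions
                    b1 = b - E_i - y[i] * (alpha[i] - a_i_old) * K[i, i] \
                         - y[j] * (alpha[j] - a_j_old) * K[i, j]
                    b2 = b - E_j - y[i] * (alpha[i] - a_i_old) * K[i, j] \
                         - y[j] * (alpha[j] - a_j_old) * K[j, j]
                    b = b1 if 0 < alpha[i] < C else (b2 if 0 < alpha[j] < C else (b1 + b2) / 2)
                    changed += 1
            passes = passes + 1 if changed == 0 else 0
        return alpha, b

Each accepted step optimizes the dual analytically over exactly two multipliers, the smallest working set compatible with the equality constraint, which is why no inner QP solver or extra matrix storage is needed.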
Jurnal Teknologi, 2003
Training a Support Vector Machine requires the solution of a very large quadratic programming problem. In order to study the influence of a particular quadratic programming solver on the Support Vector Machine, three different quadratic programming solvers are used to perform the Support Vector Machine training. The performance of these solvers, in terms of execution time and quality of the solutions, is analyzed and compared. A practical method to reduce the training time is studied in full. Keywords: support vector machines, quadratic programming.
Parallel Computing, 2003
This work is concerned with the solution of the convex quadratic programming problem arising in training the learning machines known as support vector machines. The problem is subject to box constraints and to a single linear equality constraint; it is dense and, for many practical applications, it becomes a large-scale problem. Thus, approaches based on explicit storage of the matrix of the quadratic form are not practicable. Here we present an easily parallelizable approach based on a decomposition technique that splits the problem into a sequence of smaller quadratic programming subproblems. These subproblems are solved by a variable projection method that is well suited to a parallel implementation and is very effective in the case of Gaussian support vector machines. Performance results are presented on well known large-scale test problems, in scalar and parallel environments. The numerical results show that the approach is comparable on scalar machines with a widely used technique and can achieve good efficiency and scalability on a multiprocessor system.
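The computational kernel of such gradient projection approaches is the projection of a trial point onto the feasible set of the subproblem. As a sketch, with notation assumed here rather than taken from the paper, projecting a point \(z\) onto \(\Omega = \{w : y^T w = 0,\; 0 \le w \le C\}\) means solving

\[
P_{\Omega}(z) = \arg\min_{w} \tfrac{1}{2}\,\|w - z\|_2^2 \quad \text{s.t.} \quad y^T w = 0, \;\; 0 \le w \le C,
\]

which becomes separable once the equality constraint is dualized: \(w_i(\lambda) = \max(0, \min(C,\, z_i - \lambda y_i))\), with \(\lambda\) found by solving the one-dimensional monotone equation \(y^T w(\lambda) = 0\), e.g. by bisection. The cheapness of this projection is what makes such inner solvers attractive for parallel implementations.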
Journal of Computer and Communications
The manuscript presents an augmented Lagrangian-fast projected gradient method (ALFPGM) with an improved working set selection scheme, pWSS, a decomposition-based algorithm for training support vector classification machines (SVM). The manuscript describes the ALFPGM algorithm, provides numerical results for training SVMs on large data sets, and compares the training times of ALFPGM with those of the Sequential Minimal Optimization (SMO) algorithm from the Scikit-learn library. The numerical results demonstrate that ALFPGM with the improved working set selection scheme is capable of training SVMs with tens of thousands of training examples in a fraction of the training time of some widely adopted SVM tools.
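The general pattern behind an augmented Lagrangian treatment of the SVM dual (written here in minimization form, \(f(\alpha) = \tfrac{1}{2}\alpha^T Q \alpha - e^T \alpha\) with \(Q_{ij} = y_i y_j K(x_i, x_j)\)) is to move the single equality constraint into the objective,

\[
L_{\mu}(\alpha, \lambda) \;=\; f(\alpha) \;+\; \lambda\, y^T \alpha \;+\; \frac{\mu}{2}\left(y^T \alpha\right)^2,
\]

so that each outer iteration only minimizes \(L_{\mu}\) over the box \(0 \le \alpha \le C\), a task suited to fast projected gradient methods, followed by the multiplier update \(\lambda \leftarrow \lambda + \mu\, y^T \alpha\). This is a sketch of the standard scheme, not necessarily the exact ALFPGM formulation.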
Pattern Recognition Letters, 2012
This paper presents a method to train a Support Vector Regression (SVR) model in the large-scale case, where the number of training samples exceeds the available computational resources. The proposed scheme consists of posing the SVR problem entirely as a Linear Programming (LP) problem and of developing a sequential optimization method based on variable decomposition, constraint decomposition, and the use of primal–dual interior point methods. Experimental results demonstrate that the proposed approach has ...
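One common way to pose SVR as an LP, which may differ in its details from the formulation used in this paper, replaces the quadratic regularizer by the \(\ell_1\)-norm of the expansion coefficients:

\[
\min_{\alpha, b, \xi} \;\; \|\alpha\|_1 + C \sum_{i=1}^{\ell} \xi_i
\quad \text{s.t.} \quad \Big| \sum_{j=1}^{\ell} \alpha_j K(x_i, x_j) + b - y_i \Big| \;\le\; \varepsilon + \xi_i, \quad \xi_i \ge 0,
\]

made linear by splitting \(\alpha = \alpha^{+} - \alpha^{-}\) with \(\alpha^{+}, \alpha^{-} \ge 0\), after which standard primal–dual interior point machinery applies.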
Expert Systems with Applications, 2014
This paper presents a study of the Quadratic optimization Problem (QP) underlying the Support Vector Machine (SVM) learning process. Considering the Karush-Kuhn-Tucker (KKT) optimality conditions, we present implementation strategies for the SVM-QP following three classical approaches: i) active set methods, in both primal and dual spaces, ii) interior point methods, and iii) linearization strategies. We also present the general extension for treating large-scale applications, consisting of a general decomposition of the QP problem into smaller ones. In the same manner, we discuss some considerations to take into account when starting the general learning process. We compare the performance of the optimization strategies using some well-known benchmark databases.
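Concretely, the KKT conditions referred to here characterize optimality of the SVM-QP through the decision function \(f(x) = \sum_j \alpha_j y_j K(x_j, x) + b\):

\[
\alpha_i = 0 \;\Rightarrow\; y_i f(x_i) \ge 1, \qquad
0 < \alpha_i < C \;\Rightarrow\; y_i f(x_i) = 1, \qquad
\alpha_i = C \;\Rightarrow\; y_i f(x_i) \le 1,
\]

and decomposition methods typically use the most violated of these conditions to select which multipliers enter the next working set.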
2006
Parallel software for solving the quadratic program arising in training support vector machines for classification problems is introduced. The software implements an iterative decomposition technique and exploits both the storage and the computing resources available on multiprocessor systems, by distributing the heaviest computational tasks of each decomposition iteration. Based on a wide range of recent theoretical advances, relevant decomposition issues, such as the quadratic subproblem solution, the gradient updating, and the working set selection, are systematically described, and their careful combination to obtain an effective parallel tool is discussed. A comparison with state-of-the-art packages on benchmark problems demonstrates the good accuracy and the remarkable time savings achieved by the proposed software. Furthermore, challenging experiments on real-world data sets with millions of training samples highlight how the software makes large scale standard nonlinear support vector m...
Optimization Methods and Software, 2005
Gradient projection methods based on the Barzilai-Borwein spectral steplength choices are considered for quadratic programming problems with simple constraints. Well-known nonmonotone spectral projected gradient methods and variable projection methods are discussed. For both approaches the behavior of different combinations of the two spectral steplengths is investigated. A new adaptive steplength alternating rule is proposed that becomes the basis for a generalized version of the variable projection method (GVPM). Convergence results are given for the proposed approach and its effectiveness is shown by means of an extensive computational study on several test problems, including the special quadratic programs arising in training support vector machines. Finally, the GVPM behavior as inner QP solver in decomposition techniques for large-scale support vector machines is also evaluated.
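For completeness, with \(s_{k-1} = x_k - x_{k-1}\) and \(z_{k-1} = \nabla f(x_k) - \nabla f(x_{k-1})\), the two Barzilai-Borwein steplengths alternated by such rules are

\[
\alpha_k^{\mathrm{BB1}} \;=\; \frac{s_{k-1}^T s_{k-1}}{s_{k-1}^T z_{k-1}},
\qquad
\alpha_k^{\mathrm{BB2}} \;=\; \frac{s_{k-1}^T z_{k-1}}{z_{k-1}^T z_{k-1}} ;
\]

the adaptive alternating rule proposed in the paper decides at each iteration which of the two to use.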
2009
We present results from a comparative empirical study on the performance of two methods for constructing support vector machines (SVMs). The first method is the conventional one based on the quadratic programming approach, which builds the optimal separating hyperplane ...

References (4)
- V. Vapnik. "The Nature of Statistical Learning Theory". Springer-Verlag, New York, 1995.
- H. W. Kuhn and A. W. Tucker. "Nonlinear programming". In Proc. 2nd Berkeley Symposium on Mathematical Statistics and Probability, pages 481-492, Berkeley, 1951. University of California Press.
- C. Cortes and V. Vapnik. "Support-vector networks". Machine Learning, 20:273-297, 1995.
- E. Osuna, R. Freund, and F. Girosi. "Support vector machines: Training and applications". A.I. Memo 1602, MIT A.I. Lab., 1997.