SVM: Reduction of Learning Time

2010, International Journal of Computer Applications

https://doi.org/10.5120/755-992

Abstract

Training a support vector machine (SVM) leads to a quadratic optimization problem with bound constraints and one linear equality constraint. Although this type of problem is well understood, many issues must be considered when designing an SVM learner. In particular, for large learning tasks with many training examples, off-the-shelf optimization techniques for general quadratic programs quickly become intractable in their memory and time requirements. Here we propose an algorithm that aims at reducing the learning time. It is based on the decomposition method that Osuna proposed for optimizing SVMs: it divides the original optimization problem into subproblems that are computable by the machine in terms of CPU time and memory storage. The obtained solution is in practice more parsimonious, in terms of learning time, than that found by Osuna's approach, while offering similar performance.
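For context, the quadratic program referred to above is the standard soft-margin SVM dual (this formulation is standard background, not quoted from the paper): for training pairs (x_i, y_i) with y_i in {-1, +1}, kernel k, and penalty parameter C,

    \max_{\alpha} \; \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{n} \alpha_i \alpha_j y_i y_j k(x_i, x_j)
    \quad \text{subject to} \quad 0 \le \alpha_i \le C \ \text{for all } i, \qquad \sum_{i=1}^{n} \alpha_i y_i = 0.

The bound constraints and the single equality constraint are exactly what decomposition methods exploit: freeze most of the alpha_i and optimize over a small working set chosen so that the equality constraint stays satisfied. As a minimal sketch of this idea (not the algorithm proposed in this paper), the following simplified SMO-style solver uses a working set of just two multipliers, the smallest size for which the equality constraint can be maintained; the linear kernel, random working-set choice, and tolerances are illustrative assumptions.

    import numpy as np

    def train_svm_decomposition(X, y, C=1.0, tol=1e-3, max_passes=20):
        """Simplified SMO-style decomposition for the soft-margin SVM dual.

        Each iteration freezes all multipliers except a working set of two,
        whose subproblem has a closed-form solution that keeps the equality
        constraint sum_i alpha_i y_i = 0 satisfied.
        """
        n = X.shape[0]
        # Linear kernel, precomputed here for clarity; practical solvers
        # evaluate kernel rows on demand to bound memory usage.
        K = X @ X.T
        alpha = np.zeros(n)
        b = 0.0
        passes = 0
        while passes < max_passes:
            changed = 0
            for i in range(n):
                E_i = (alpha * y) @ K[:, i] + b - y[i]  # prediction error on i
                # Select i only if it violates the KKT conditions beyond tol.
                if (y[i] * E_i < -tol and alpha[i] < C) or (y[i] * E_i > tol and alpha[i] > 0):
                    j = (i + 1 + np.random.randint(n - 1)) % n  # random j != i
                    E_j = (alpha * y) @ K[:, j] + b - y[j]
                    ai_old, aj_old = alpha[i], alpha[j]
                    # Box bounds on alpha[j] implied by the equality constraint.
                    if y[i] != y[j]:
                        L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                    else:
                        L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                    if L == H:
                        continue
                    eta = 2.0 * K[i, j] - K[i, i] - K[j, j]  # curvature of subproblem
                    if eta >= 0:
                        continue
                    # Closed-form optimum of the two-variable subproblem.
                    alpha[j] = np.clip(aj_old - y[j] * (E_i - E_j) / eta, L, H)
                    if abs(alpha[j] - aj_old) < 1e-5:
                        continue
                    alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
                    # Update the bias so the KKT conditions hold on i or j.
                    b1 = b - E_i - y[i] * (alpha[i] - ai_old) * K[i, i] \
                         - y[j] * (alpha[j] - aj_old) * K[i, j]
                    b2 = b - E_j - y[i] * (alpha[i] - ai_old) * K[i, j] \
                         - y[j] * (alpha[j] - aj_old) * K[j, j]
                    if 0 < alpha[i] < C:
                        b = b1
                    elif 0 < alpha[j] < C:
                        b = b2
                    else:
                        b = (b1 + b2) / 2.0
                    changed += 1
            passes = passes + 1 if changed == 0 else 0
        return alpha, b

    # Toy usage: two Gaussian blobs, linearly separable with high probability.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
        y = np.array([-1.0] * 20 + [1.0] * 20)
        alpha, b = train_svm_decomposition(X, y)
        w = (alpha * y) @ X  # primal weights exist because the kernel is linear
        print("training accuracy:", (np.sign(X @ w + b) == y).mean())

Osuna's original method, and presumably the variant proposed here, uses a larger working set and an explicit QP solver for each subproblem; the two-variable case above merely shows why each iteration stays cheap in both CPU time and memory.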

References (4)

  1. V. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, New York, 1995.
  2. H. W. Kuhn and A. W. Tucker. Nonlinear programming. In Proc. 2nd Berkeley Symposium on Mathematical Statistics and Probability, pages 481-492, Berkeley, 1951. University of California Press.
  3. C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20:273-297, 1995.
  4. E. Osuna, R. Freund, and F. Girosi. Support vector machines: Training and applications. A.I. Memo 1602, MIT A.I. Lab, 1997.