Support Vector Regression with Automatic Accuracy Control
1998, Perspectives in Neural Computing
https://doi.org/10.1007/978-1-4471-1599-1_12…
6 pages
Abstract
A new algorithm for Support Vector regression is proposed. For an a priori chosen ν, it automatically adjusts a flexible tube of minimal radius to the data such that at most a fraction ν of the data points lie outside. The algorithm is analysed theoretically and experimentally.
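As a rough illustration of the idea (not the paper's own code), the ν-SVR formulation is available as NuSVR in scikit-learn. The sketch below fits it to synthetic data; the dataset and all parameter values are assumptions chosen only for the example.

```python
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-3, 3, size=(200, 1)), axis=0)
y = np.sinc(X).ravel() + rng.normal(scale=0.1, size=200)

# nu upper-bounds the fraction of points lying outside the tube and
# lower-bounds the fraction of support vectors; the tube radius itself is
# chosen automatically by the optimizer rather than set by hand.
model = NuSVR(nu=0.2, C=1.0, kernel="rbf", gamma=1.0)
model.fit(X, y)

print("support vectors:", model.support_.size, "of", len(y))
```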
Related papers
2011 1st International eConference on Computer and Knowledge Engineering (ICCKE), 2011
The epsilon-SVR has two limitations. First, the tube radius (epsilon), i.e. the assumed level of noise along the y-axis, must be specified in advance. Second, the method is suited to estimating functions from training data in which the noise is independent of the input (i.e. constant). To resolve these limitations, approaches such as -SVIRN determine automatically the tube radius, or the radius of the estimated interval function, which may vary with the input. Then, for a test sample, the centre of the interval function is reported as the most probable value of the output according to the training samples. This method is useful when the noise of the data along the y-axis has a symmetric distribution; in that situation the centre of the interval function and the most probable value of the function coincide. In practice, however, the noise along the y-axis may follow an asymmetric distribution. In this paper, we propose a novel approach that simultaneously estimates an interval function and a triangular fuzzy function. The estimated interval function of our method is similar to that of -SVIRN. The centre of the triangular fuzzy function is the most probable value of the function according to the training samples, which matters when the noise of the training data along the y-axis follows an asymmetric distribution.
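To make the first limitation concrete, the following sketch (my own illustration using scikit-learn's standard epsilon-SVR, not the method proposed above) fits a model with a fixed, hand-chosen tube radius to data whose noise level grows with the input; the synthetic data and parameter values are assumptions for the example only.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(0, 5, size=(300, 1))
# Heteroscedastic noise: its spread grows with x, which a single constant
# epsilon cannot reflect.
y = np.sin(X).ravel() + rng.normal(size=300) * (0.05 + 0.1 * X.ravel())

svr = SVR(kernel="rbf", C=10.0, epsilon=0.1)   # tube radius fixed a priori
svr.fit(X, y)

outside = np.abs(y - svr.predict(X)) > svr.epsilon
print("fraction of points outside the fixed tube:", outside.mean())
```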
1999
In this report we show that the ε-tube size in Support Vector Machine (SVM) regression is 2ε/√(1 + ||w||²). Using this result we show that, when all the data points lie inside the ε-tube, minimizing ||w||² in SVM regression is equivalent to maximizing the distance between the approximating hyperplane and the farthest points in the training set. Moreover, in the most general setting, in which data points may also lie outside the ε-tube, we show that, for a fixed value of ε, minimizing ||w||² is equivalent to maximizing the sparsity of the representation of the optimal approximating hyperplane, that is, to minimizing the number of non-zero coefficients in the expression of the optimal w. The solution found by SVM regression is therefore a trade-off between sparsity of the representation and closeness to the data. We also include a complete derivation of SVM regression in the case of linear approximation.
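Taking the reconstructed expression at face value, the perpendicular width of the tube can be checked numerically for a linear model. The sketch below is an illustration only, assuming scikit-learn and arbitrary synthetic data: it fits a linear-kernel SVR and evaluates 2ε/√(1 + ||w||²) from the fitted weight vector.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(100, 1))
y = 2.0 * X.ravel() + rng.normal(scale=0.05, size=100)

eps = 0.1
svr = SVR(kernel="linear", C=100.0, epsilon=eps).fit(X, y)
w = svr.coef_.ravel()   # weight vector of the approximating hyperplane

# Perpendicular distance between the two tube boundaries in (x, y)-space.
tube_width = 2 * eps / np.sqrt(1.0 + np.dot(w, w))
print("||w|| =", np.linalg.norm(w), " perpendicular tube width =", tube_width)
```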
Computational Statistics & Data Analysis, 2014
International Journal of Advanced Manufacturing Technology, 2008
Several methods have been investigated to determine the deviation of manufactured spherical parts from ideal geometry. One of the most popular is the least squares technique, which is still widely employed in the coordinate measuring machines used by industry. The least squares algorithm is optimal under the assumption that the data set is very large, and it has the inherent disadvantage of overestimating the minimum tolerance zone, sometimes resulting in the rejection of good parts. In addition, it requires that the data be normally distributed. The support vector regression approach alleviates the need for these assumptions. While most fitting algorithms in practice today require that the sampled data accurately represent the surface being inspected, support vector regression provides a generalization over the surface. We describe how the concepts of support vector regression can be applied to determining tolerance zones of nonlinear surfaces, demonstrating the unique potential of support vector machine algorithms in the area of coordinate metrology. Specifically, we address part quality inspection of spherical geometries.
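The following toy sketch is not the paper's procedure, only a rough illustration of the general idea: radial deviations of simulated probe points from a nominal sphere are modelled with an RBF-kernel SVR, and a tolerance band is read off the fitted deviation surface. The data, kernel, and parameter values are all assumptions.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
n = 400
theta = rng.uniform(0, np.pi, n)       # polar angle of each probed point
phi = rng.uniform(0, 2 * np.pi, n)     # azimuth of each probed point
# Simulated form error of the sphere plus measurement noise (millimetres).
deviation = 0.02 * np.sin(3 * phi) * np.sin(theta) + rng.normal(scale=0.005, size=n)

# Represent each probe direction as a unit vector so the model is smooth on the sphere.
features = np.column_stack([np.sin(theta) * np.cos(phi),
                            np.sin(theta) * np.sin(phi),
                            np.cos(theta)])
svr = SVR(kernel="rbf", C=10.0, epsilon=0.002, gamma=2.0).fit(features, deviation)

fitted = svr.predict(features)
print("estimated tolerance zone width:", fitted.max() - fitted.min())
```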
Applied Soft Computing, 2020
We propose a novel convex loss function, termed the 'ϵ-penalty loss function', to be used in the Support Vector Regression (SVR) model. The proposed ϵ-penalty loss function is shown to be optimal for a more general noise distribution; the popular ϵ-insensitive loss function and the Laplace loss function are particular cases of it. Using the proposed loss function, we propose two new Support Vector Regression models in this paper. The first, termed the 'ϵ-Penalty Support Vector Regression' (ϵ-PSVR) model, minimizes the proposed loss function with L2-norm regularization. The second minimizes the proposed loss function with L1-norm regularization and is termed the 'L1-Norm Penalty Support Vector Regression' (L1-Norm PSVR) model. The proposed loss function can apply different rates of penalization inside and outside of the ϵ-tube. This strategy enables the proposed SVR models to use the full information of the training set, which makes them generalize well. Further, numerical results from experiments on various artificial, benchmark, and financial time series datasets show that the proposed SVR models have better generalization ability than existing SVR models.
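The abstract does not give the exact form of the ϵ-penalty loss, so the sketch below is only an assumed shape, chosen because it reproduces the two stated special cases: a convex loss with slope k inside the tube and slope 1 outside, which reduces to the ϵ-insensitive loss for k = 0 and to the Laplace loss for k = 1. The paper's actual definition may differ.

```python
import numpy as np

def epsilon_penalty_loss(residual, eps=0.5, k=0.25):
    """Assumed form only: slope k inside the eps-tube, slope 1 outside,
    continuous at |residual| = eps."""
    r = np.abs(residual)
    return np.where(r <= eps, k * r, (r - eps) + k * eps)

r = np.linspace(-2, 2, 9)
print(epsilon_penalty_loss(r, eps=0.5, k=0.0))   # epsilon-insensitive loss
print(epsilon_penalty_loss(r, eps=0.5, k=1.0))   # Laplace loss |r|
```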
Genetic and Evolutionary Computation Conference, 2007
In this paper we introduce XCSF with support vector prediction: the problem of learning the prediction function is solved as a support vector regression problem, and each classifier exploits a Support Vector Machine to compute the prediction. In XCSF with support vector prediction, XCSFsvm, the genetic algorithm adapts classifier conditions, classifier actions, and the SVM kernel parameters. We compare XCSF with support
In this work, a regression problem is studied in which the elements of the database are sets with certain geometrical properties. In particular, our model can be applied to handle data affected by some kind of noise or uncertainty, interval-valued data, and databases with missing values as well. The proposed formulation is based on the standard ε-Support Vector Regression approach. In the interval-data case, two different formulations are obtained, according to the way the distance between the predicted and the actual intervals is measured. Computational experiments with real databases are performed.
2014 IEEE International Conference on System Science and Engineering (ICSSE), 2014
The computational savings offered by sequential minimal optimization (SMO) are crucial for support vector regression (SVR) in large-scale function approximation. Given this importance, the paper broadly surveys the relevant research, digests its essentials, and reorganizes the theory with a plain explanation. Seeking first to provide a literal comprehension of SVR-SMO, the paper reworks the mathematical development within a framework of unified, uninterrupted derivations, together with illustrations that visually clarify the key ideas. The development is also examined from an alternative viewpoint; this cross-examination puts the development on a more solid foundation and leads to a consistent suggestion of a straightforward generalized algorithm. Consistent experimental results are also included.
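As a practical footnote (my own illustration, not drawn from the paper): libsvm-based SVR implementations such as scikit-learn's solve the dual with an SMO-type decomposition and expose a few solver-level controls. The data and parameter values below are arbitrary.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
X = rng.normal(size=(5000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=5000)

svr = SVR(kernel="rbf", C=1.0, epsilon=0.1,
          tol=1e-3,          # stopping tolerance of the SMO-type solver
          cache_size=500,    # kernel cache in MB: the main speed/memory trade-off
          shrinking=True,    # heuristic that temporarily removes bounded multipliers
          max_iter=-1)       # no hard cap on solver iterations
svr.fit(X, y)
print("support vectors:", svr.support_.size)
```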
