
Linear Programming Regressive Support Vector Machine



Abstract:

Based on an analysis of the general norm used in the structural risk term to control model complexity in regression problems, two kinds of linear programming support vector machines, corresponding to the l1-norm and the l∞-norm, are presented, covering both linear and nonlinear SVMs. A numerical experiment on artificial data compares these two linear programming support vector machines with the classic support vector machine. Simulation results show that the generalization performance of the two linear programming SVMs is similar to that of the classic one; the l1-SVM has fewer support vectors and a faster learning speed, and its learning result is not sensitive to the learning parameters.
Date of Conference: 13-16 August 2006
Date Added to IEEE Xplore: 04 March 2009
Print ISBN: 1-4244-0061-9

Conference Location: Dalian, China

1. Introduction

Since Vapnik proposed the SVM (Support Vector Machine), this machine learning tool has been applied increasingly in pattern recognition, nonlinear prediction and control, etc. SVM is based on statistical learning theory and tries to minimize the structural risk in order to improve the generalization ability of the learning machine. Classical SVM theory translates a machine learning problem into the solution of a quadratic program, which is an iterative process whose per-step cost grows greatly once there is a large number of examples. At present, most research focuses on reducing the computational complexity of solving the quadratic programming problem; the most influential result is the famous SMO (sequential minimal optimization) algorithm proposed in Ref. [2]. In addition, Refs. [3] and [4] improve SMO by reducing the working set; Ref. [5] decreases the amount of computation by decomposing out the Lagrange multipliers that reach the upper bound; Ref. [6] amends the SMO algorithm by optimizing in groups. Unlike the SMO algorithm, Ref. [7] advances a new interior-point optimization method based on decomposing the kernel matrix into lower-rank matrices. Based on the adoption of the l1-norm in the structural risk, linear programming SVM was first presented in Ref. [8], which turns a machine learning problem into the solution of a linear program. It is important and significant that only finitely many steps are needed to solve a linear programming problem by means of the simplex algorithm, but that method is difficult to generalize to regression learning problems such as function estimation.
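To make the idea concrete, the l1-norm variant discussed above can be sketched as a plain linear program: replace the quadratic term ||w||² of classical SVM regression with ||w||₁, split w = w⁺ − w⁻ (and b = b⁺ − b⁻) so all variables are nonnegative, and bound the ε-insensitive residuals with slacks. The sketch below is a minimal, hypothetical implementation for the linear regression case using `scipy.optimize.linprog`; the function name, parameters, and exact formulation are illustrative assumptions, not the paper's own code.

```python
import numpy as np
from scipy.optimize import linprog


def l1_lp_svr(X, y, C=10.0, eps=0.1):
    """Sketch of an l1-norm linear-programming SVM for regression.

    minimize  sum(w+ + w-) + C * sum(xi + xi*)
    s.t.      y_i - (w.x_i + b) <= eps + xi_i
              (w.x_i + b) - y_i <= eps + xi*_i
    with w = w+ - w-, b = b+ - b-, and all variables >= 0,
    so the whole problem is one linear program.
    """
    n, d = X.shape
    # Variable order: w+ (d), w- (d), b+ (1), b- (1), xi (n), xi* (n).
    c = np.concatenate([np.ones(2 * d), [0.0, 0.0], C * np.ones(2 * n)])

    # Upper tube constraint: y_i - (w.x_i + b) - xi_i <= eps
    A1 = np.hstack([-X, X, -np.ones((n, 1)), np.ones((n, 1)),
                    -np.eye(n), np.zeros((n, n))])
    b1 = eps - y
    # Lower tube constraint: (w.x_i + b) - y_i - xi*_i <= eps
    A2 = np.hstack([X, -X, np.ones((n, 1)), -np.ones((n, 1)),
                    np.zeros((n, n)), -np.eye(n)])
    b2 = eps + y

    res = linprog(c,
                  A_ub=np.vstack([A1, A2]),
                  b_ub=np.concatenate([b1, b2]),
                  bounds=[(0, None)] * (2 * d + 2 + 2 * n),
                  method="highs")
    w = res.x[:d] - res.x[d:2 * d]
    b = res.x[2 * d] - res.x[2 * d + 1]
    return w, b
```

Because the objective penalizes ||w||₁ directly, the solver drives many components of w⁺ and w⁻ to exact zeros at a vertex of the feasible polytope, which is the mechanism behind the sparser support-vector sets the abstract reports for the l1-SVM; the nonlinear case replaces the rows of X with kernel evaluations.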

