1. Introduction
Since Vapnik proposed the support vector machine (SVM), this machine learning tool has been applied increasingly in pattern recognition, nonlinear prediction and control, among other fields. SVM is based on statistical learning theory: it minimizes the structural risk in order to improve the generalization ability of the learning machine. Classical SVM theory translates a machine learning problem into the solution of a quadratic programming (QP) problem, an iterative process whose per-step computation grows greatly once there are a large number of training examples.

At present, most research focuses on reducing the computational complexity of solving this quadratic programming problem; the most influential result is the well-known SMO (sequential minimal optimization) algorithm proposed in Ref. [2]. In addition, Refs. [3] and [4] improve SMO by reducing the working set; Ref. [5] decreases the amount of computation by decomposing the Lagrange multipliers that reach the upper bound; and Ref. [6] amends the SMO algorithm by optimizing in groups. Unlike the SMO family of methods, Ref. [7] advances a new interior-point optimization method based on decomposing the kernel matrix into low-rank matrices.

By adopting the 1-norm in the structural risk, linear programming SVM is first presented in Ref. [8], which turns a machine learning problem into the solution of a linear programming (LP) problem. This is important because an LP problem can be solved in only finitely many steps by means of the simplex algorithm; however, the method of Ref. [8] is difficult to generalize to regression learning problems such as function estimation. The two formulations are sketched below for concreteness.
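For concreteness, the QP and LP formulations contrasted above can be stated as follows; the notation ($\ell$ training pairs $(x_i, y_i)$ with $y_i \in \{-1, +1\}$, kernel $K$, penalty parameter $C$) is a standard textbook sketch of ours and is not drawn from Refs. [2] or [8]. The classical soft-margin SVM dual is the quadratic program

\max_{\alpha}\ \sum_{i=1}^{\ell}\alpha_i \;-\; \frac{1}{2}\sum_{i,j=1}^{\ell} y_i y_j\,\alpha_i\alpha_j K(x_i,x_j)
\quad\text{s.t.}\quad \sum_{i=1}^{\ell} y_i\alpha_i = 0,\qquad 0 \le \alpha_i \le C,

while the 1-norm variant, with $f(x)=\sum_{j=1}^{\ell}(\alpha_j^{+}-\alpha_j^{-})K(x,x_j)+b$ and $\alpha_j^{\pm}\ge 0$ splitting each expansion coefficient into positive and negative parts, is the linear program

\min_{\alpha^{\pm},\,b,\,\xi}\ \sum_{i=1}^{\ell}\bigl(\alpha_i^{+}+\alpha_i^{-}\bigr) \;+\; C\sum_{i=1}^{\ell}\xi_i
\quad\text{s.t.}\quad y_i f(x_i) \ge 1-\xi_i,\qquad \xi_i \ge 0,\quad i=1,\dots,\ell.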
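As a minimal illustration of how the linear program above can be handed to an off-the-shelf simplex-based solver, the following Python sketch trains a 1-norm SVM from a precomputed kernel matrix. The function name lp_svm_train and the choice of SciPy's linprog with the HiGHS solver are our own illustrative assumptions, not part of Ref. [8].

    import numpy as np
    from scipy.optimize import linprog

    def lp_svm_train(K, y, C=1.0):
        # K: (l, l) precomputed kernel matrix; y: (l,) labels in {-1, +1}.
        # Decision variables are stacked as v = [alpha_plus, alpha_minus, b, xi].
        y = np.asarray(y, dtype=float)
        l = len(y)
        Y = np.diag(y)
        # Objective: sum(alpha+ + alpha-) + C * sum(xi); the bias b is unpenalized.
        c = np.concatenate([np.ones(l), np.ones(l), [0.0], C * np.ones(l)])
        # Margin constraints y_i * f(x_i) >= 1 - xi_i, rewritten as A_ub @ v <= b_ub.
        A_ub = np.hstack([-Y @ K, Y @ K, -y.reshape(-1, 1), -np.eye(l)])
        b_ub = -np.ones(l)
        # alpha+, alpha-, xi are nonnegative; b is free.
        bounds = [(0, None)] * (2 * l) + [(None, None)] + [(0, None)] * l
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        v = res.x
        alpha = v[:l] - v[l:2 * l]   # expansion coefficients of f(x)
        b = v[2 * l]                 # bias term
        return alpha, b

The resulting classifier is sign(sum_j alpha_j K(x, x_j) + b); the solver terminates in finitely many steps, matching the property of the simplex algorithm highlighted above.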