I. Introduction
KERNEL-based support vector (SV) learning was originally proposed and developed for nonlinear pattern recognition. As the cornerstone of nonlinear SV learning, kernel functions play an essential role in providing a general framework for representing data. The use of kernel functions allows high-dimensional inner-product computations to be performed with very little overhead and retains all the benefits of linear estimation; this simplicity and ease of application make the approach attractive to practitioners. However, SV learning for approximating real-valued functions is more delicate than the approximation of indicator functions in pattern recognition: different real-valued function estimation problems require different sets of approximating functions, owing to the varying complexity of the underlying dependencies. The kernel determines the class of functions from which an SVM draws its solution, and the choice of kernel significantly affects the performance of an SVM [1]–[2]. It is therefore important to construct special kernels that reflect special properties of the approximating functions [3]–[4].
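As a minimal numerical illustration of the low-overhead inner-product computation described above (not part of the original text; the function and variable names are hypothetical), the sketch below checks that a homogeneous degree-2 polynomial kernel on R^2, evaluated directly in the input space, agrees with the inner product under its explicit feature map:

```python
import numpy as np

def phi(v):
    # Explicit feature map for the homogeneous degree-2 polynomial
    # kernel on R^2: phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2)
    x1, x2 = v
    return np.array([x1**2, np.sqrt(2.0) * x1 * x2, x2**2])

def poly_kernel(x, y):
    # Kernel evaluated directly in the input space: k(x, y) = (x . y)^2,
    # avoiding any computation in the higher-dimensional feature space
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

# The cheap input-space evaluation matches the feature-space inner product
assert np.isclose(poly_kernel(x, y), np.dot(phi(x), phi(y)))
```

For this degree-2 example the feature space is only 3-dimensional, but the same identity holds for higher degrees and dimensions, where the explicit map quickly becomes expensive while the input-space evaluation stays a single dot product.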