I. Introduction
From a mathematical point of view, research on the approximation capabilities of feedforward neural networks has focused on two aspects: universal approximation on compact input sets and approximation on finite training sets. Many researchers have explored the universal approximation capabilities of standard multilayer feedforward neural networks [1], [2], [3]. In real applications, however, neural networks are trained on finite training sets. For function approximation on a finite training set, Huang and Babri [4] showed that a single-hidden-layer feedforward neural network (SLFN) with at most $N$ hidden neurons and with almost any nonlinear activation function can learn $N$ distinct observations with zero error. It should be noted that the input weights (linking the input layer to the first hidden layer) need to be adjusted in all of this previous theoretical research, as well as in almost all practical learning algorithms for feedforward neural networks.
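The Huang-Babri result can be illustrated numerically: with $N$ hidden neurons, the $N \times N$ hidden-layer output matrix is invertible with probability one for randomly chosen input weights, so the output weights can be solved for exactly. The following is a minimal sketch of that construction, not code from [4]; the variable names, the sigmoid activation, and the toy data ($N = 10$ scalar inputs with random targets) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy problem: N distinct 1-D inputs with arbitrary targets.
N = 10
x = rng.uniform(-1.0, 1.0, size=(N, 1))
t = rng.standard_normal((N, 1))

# Random input weights and biases for N sigmoid hidden neurons
# (left unadjusted, per the Huang-Babri construction).
W = rng.standard_normal((1, N))   # input-to-hidden weights
b = rng.standard_normal((1, N))   # hidden biases

# Hidden-layer output matrix H (N x N): H[j, i] = g(w_i . x_j + b_i)
H = 1.0 / (1.0 + np.exp(-(x @ W + b)))

# H is square and, with probability one, invertible, so the output
# weights beta satisfy H @ beta = t exactly.
beta = np.linalg.solve(H, t)

# Training error is zero up to floating-point round-off.
print(np.max(np.abs(H @ beta - t)))   # e.g. ~1e-12
```

Here only the output weights are computed analytically; the randomly drawn input weights are never tuned, which is exactly the observation the next point turns on.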