I. Introduction
Owing to their simple architecture and learning scheme, radial basis function (RBF) networks have become one of the most popular neural network models. In the literature, RBF networks have been intensively studied and widely applied, e.g., to pattern recognition [12], data mining [9], and time series forecasting [5], [10]. In general, the structural complexity of an RBF network depends on the number of hidden nodes, which in turn grows in proportion to the input dimension. Hence, effective dimension reduction of the network's input space can considerably decrease its structural complexity, thereby speeding up its convergence. Traditionally, principal component analysis (PCA) is a prevalent statistical tool for input dimension reduction. The basic idea is to select the first several principal components of the observations as the RBF inputs. Since PCA uses only second-order statistics, the resulting principal components are de-correlated but not truly independent. That is, some useful information in the non-principal components may be discarded during the dimension reduction process. Consequently, the performance of the RBF network may deteriorate after PCA preprocessing [6].
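To make the PCA preprocessing step concrete, the following is a minimal sketch (not the method proposed in this paper): the observations are centered, projected onto their first k principal components, and the projections are then used as the RBF network inputs. The function name pca_reduce and the dimensions in the usage example are illustrative assumptions.

```python
import numpy as np

def pca_reduce(X, k):
    """Project observations X (n_samples x d) onto the first k principal components.

    Illustrative only: PCA relies on second-order statistics (the sample covariance),
    so the reduced components are de-correlated but not necessarily independent.
    """
    Xc = X - X.mean(axis=0)                  # center the observations
    cov = np.cov(Xc, rowvar=False)           # d x d sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigendecomposition (ascending eigenvalues)
    order = np.argsort(eigvals)[::-1]        # reorder eigenvalues descending
    W = eigvecs[:, order[:k]]                # top-k principal directions
    return Xc @ W                            # reduced inputs for the RBF network

# Example: reduce 10-dimensional observations to 3 RBF inputs
X = np.random.randn(200, 10)
Z = pca_reduce(X, k=3)
print(Z.shape)  # (200, 3)
```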