1 INTRODUCTION
Artificial neural network (NN) techniques have proven very effective for identifying a wide class of nonlinear systems, especially when no information about the complete model is available, or even when the controlled plant is treated as a black box [1]. NNs can be classified as static (feedforward) or dynamic (recurrent or differential) networks. In most recent publications [2]–[4], feedforward NNs are applied to a class of global optimization problems. There, a learning rule is used to adjust the weights of a static NN so as to minimize the identification error. Although static networks have been used successfully in many applications, their structure has major disadvantages. The first is a slow learning rate: the weight updates do not exploit information about the local structure of the network. Another deficiency is that static networks have no memory, so their outputs are uniquely determined by the current inputs and weights; this makes the function approximation highly sensitive to the training data.
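To make the memory distinction concrete, the following minimal NumPy sketch contrasts the two classes. The layer sizes, tanh activations, and all variable names are illustrative assumptions, not taken from the cited works: a static net maps the current input directly to an output, while a recurrent net carries a hidden state that depends on the input history.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared weights for one hidden layer of 8 units (sizes chosen arbitrarily).
W_in = rng.standard_normal((8, 1))         # input -> hidden
W_out = rng.standard_normal((1, 8))        # hidden -> output
W_rec = rng.standard_normal((8, 8)) * 0.1  # hidden -> hidden (recurrent net only)

def static_net(u):
    """Feedforward (static) net: output is a function of the current input alone."""
    return W_out @ np.tanh(W_in @ u)

def recurrent_net(u_sequence):
    """Recurrent (dynamic) net: the hidden state h acts as memory of past inputs."""
    h = np.zeros((8, 1))
    outputs = []
    for u in u_sequence:
        h = np.tanh(W_in @ u + W_rec @ h)  # state mixes current input with history
        outputs.append(W_out @ h)
    return outputs

u1 = np.array([[1.0]])
u2 = np.array([[0.5]])

# The static net returns the same value for u2 regardless of what preceded it...
print(static_net(u2))
# ...while the recurrent net's response to u2 changes with the preceding input.
print(recurrent_net([u1, u2])[-1])
print(recurrent_net([u2, u2])[-1])
```

The two final prints differ only because the recurrent net's internal state retains information about the earlier input, which is precisely the memory that a static net lacks.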