I. Introduction
NEURAL NETWORKS are capable of learning and reconstructing complex nonlinear mappings and have been widely studied by control researchers for the identification and design of control systems. The hypothesis space of a network may be thought of as the set of all control actions that an artificial neural network with a given architecture can, in principle, learn. Until recently, a conventional artificial neural network had a static architecture; that is, the number of neurons and their connectivity pattern were specified and fixed by the network's designer before training. This static structure restricts the space of hypotheses the network can represent, so the architecture must be designed with the space of possible control actions for the application in mind. In most control problems, however, neither the perturbations acting on the system nor the appropriate control actions to execute are known a priori, so the network structure cannot be predetermined. To overcome the limitations imposed by an a priori fixed hypothesis space, a neural network should be allowed to modify its structure as a function of learning. In fact, many of the limitations encountered by neural networks stem from their fixed architecture. It has been shown that the problem of training a fixed-size network cannot be solved in polynomial time [42], even when the network has only three hidden nodes [41]. This suggests that neural network models which allow structural changes during learning may have fundamentally different learning properties.
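As a purely illustrative sketch of such structural adaptation (not the method developed in this paper), the following Python example shows a single-hidden-layer regressor that adds hidden units whenever its training error plateaus. The names (GrowingNet, fit_growing, init_unit) and the plateau-based growth criterion are assumptions introduced here for illustration only.

```python
# Minimal sketch of a constructive network: a single-hidden-layer regressor
# that adds hidden units whenever the training error plateaus.
# All names and thresholds are illustrative assumptions, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

def init_unit(n_in):
    """Randomly initialise the input weights and bias of one new hidden unit."""
    return rng.normal(scale=0.5, size=n_in), rng.normal(scale=0.5)

class GrowingNet:
    def __init__(self, n_in, lr=0.05):
        self.lr = lr
        self.n_in = n_in
        self.W = np.empty((0, n_in))   # hidden weights, one row per unit
        self.b = np.empty(0)           # hidden biases
        self.v = np.empty(0)           # output weights
        self.c = 0.0                   # output bias
        self.add_unit()                # start with a single hidden unit

    def add_unit(self):
        w, b = init_unit(self.n_in)
        self.W = np.vstack([self.W, w])
        self.b = np.append(self.b, b)
        self.v = np.append(self.v, 0.0)  # zero output weight: growth does not disturb the current map

    def forward(self, X):
        H = np.tanh(X @ self.W.T + self.b)     # hidden activations
        return H, H @ self.v + self.c

    def train_step(self, X, y):
        H, y_hat = self.forward(X)
        err = y_hat - y
        # Gradient descent on the mean-squared error
        grad_v = H.T @ err / len(y)
        grad_c = err.mean()
        dH = np.outer(err, self.v) * (1.0 - H ** 2)
        grad_W = dH.T @ X / len(y)
        grad_b = dH.mean(axis=0)
        self.v -= self.lr * grad_v
        self.c -= self.lr * grad_c
        self.W -= self.lr * grad_W
        self.b -= self.lr * grad_b
        return 0.5 * np.mean(err ** 2)

def fit_growing(net, X, y, epochs=2000, patience=100, tol=1e-4, max_units=20):
    """Grow the hidden layer whenever the loss stops improving (assumed criterion)."""
    best, stall = np.inf, 0
    for _ in range(epochs):
        loss = net.train_step(X, y)
        if loss < best - tol:
            best, stall = loss, 0
        else:
            stall += 1
        if stall >= patience and len(net.v) < max_units:
            net.add_unit()             # structural change during learning
            best, stall = np.inf, 0
    return best

# Toy usage: a nonlinear map that the initial one-unit network cannot fit.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1]
net = GrowingNet(n_in=2)
final_loss = fit_growing(net, X, y)
print(f"hidden units: {len(net.v)}, final loss: {final_loss:.4f}")
```

In this sketch a newly added unit starts with a zero output weight, so enlarging the hypothesis space leaves the currently learned mapping unchanged until subsequent training adapts the new parameters.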