I. Introduction
In the analysis and design of dynamic systems, recurrent neural networks (RNNs) with local feedback play an important role and have achieved great progress in a wide range of fields [1], [2]. The diagonal recurrent neural network (DRNN) was proposed by Ku and Lee [3]. It is a simplified form of the fully connected recurrent neural network with one hidden layer, in which the hidden layer is composed of self-recurrent neurons. Owing to its dynamic characteristics and relatively simple architecture, the DRNN is a very useful tool for many real-time applications. To guarantee convergence and to speed up learning, Ku and Lee introduced adaptive learning rates via a Lyapunov function and developed a convergence theorem for the dynamic adaptive backpropagation algorithm. In their proof, however, one group of weights is updated while the others are held fixed. We argue that this is unreasonable, because the output error of the system is a function of all the weights, not only some of them. This paper therefore revisits the convergence theorem of the DRNN.
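To make the architecture concrete: in a DRNN each hidden neuron feeds back only to itself, so the recurrent weight matrix is diagonal and can be stored as a vector. The following is a minimal sketch of the forward pass under common assumptions (tanh activation, single linear output, and the variable names are illustrative, not taken from [3]):

```python
import numpy as np

def drnn_forward(x_seq, W_in, w_rec, w_out, f=np.tanh):
    """Forward pass of a diagonal recurrent neural network (sketch).

    x_seq : (T, n_in) input sequence
    W_in  : (n_hid, n_in) input-to-hidden weights
    w_rec : (n_hid,) diagonal self-recurrent weights (one per hidden neuron)
    w_out : (n_hid,) hidden-to-output weights (single scalar output)
    """
    h = np.zeros_like(w_rec)          # hidden state, one value per neuron
    outputs = []
    for x in x_seq:
        # Diagonal recurrence: each neuron sees only its own previous output,
        # so the feedback term is elementwise, not a full matrix product.
        s = W_in @ x + w_rec * h
        h = f(s)
        outputs.append(w_out @ h)     # linear output layer
    return np.array(outputs)
```

Because the recurrence is elementwise, a step costs O(n_hid * n_in) instead of the O(n_hid^2) of a fully connected recurrent layer, which is the simplification that makes the DRNN attractive for real-time use.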