I. Introduction
Neural networks (NNs) combine efficient computation with adaptive intelligence, a pairing that has stimulated a great deal of research into their use for high-performance learning, control, and optimization [1]–[4]. Over the past three decades, tremendous effort has been devoted to both the basic theory and the real-world applications of NNs [5]–[8]. Various NN architectures have been developed on the basis of solid mathematical and engineering theory, as well as fundamental principles governing biological neural systems; these include recurrent NNs, cellular NNs, Hopfield NNs, impulsive NNs, and delayed NNs [9]–[11], [13], [18], [19]. For instance, Hopfield NNs are formulated and analyzed in [1]; the stability of delayed Hopfield NNs is investigated in [6], [24], [34]; and the dynamical behaviors of delayed recurrent NNs are studied in [7], [8], [28], and [30].
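To make the Hopfield NNs mentioned above concrete, the following is a minimal sketch of a discrete Hopfield network, assuming bipolar states, the standard Hebbian outer-product storage rule, and a sign-threshold update; it is an illustrative simplification, not necessarily the exact continuous formulation analyzed in [1].

```python
import numpy as np

def train_hopfield(patterns):
    """Build a weight matrix via the Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Update each unit in turn with a sign activation until settled."""
    s = state.copy()
    for _ in range(steps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

if __name__ == "__main__":
    stored = np.array([[1, -1, 1, -1, 1, -1]])  # one bipolar pattern
    W = train_hopfield(stored)
    noisy = np.array([1, -1, 1, -1, -1, -1])    # one bit flipped
    print(recall(W, noisy))  # recovers the stored pattern
```

The recall loop illustrates the key dynamical property exploited in the stability analyses cited above: the network state evolves toward a stored pattern acting as an attractor, which is why convergence and stability results are central to this line of work.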