I. Introduction
Resistive random access memory (ReRAM) [1]–[3] is one of the most promising devices for in-memory computing: its crossbar structure improves the efficiency of vector-matrix multiplication (VMM) by performing multiplications in parallel and summing the resulting currents flowing through the ReRAM devices. In particular, ReRAM-based VMM is well suited to convolutional neural networks (CNNs), which require massive matrix operations during both training and inference. The in-memory computing architecture and the tunable analog resistance of ReRAM enable power-efficient VMM and training within a highly integrated memory structure. For these reasons, various types of CNN hardware with ReRAM-based VMM accelerators have been proposed [4]–[6], and their effectiveness has been demonstrated experimentally.
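As an illustrative sketch of this operation (the symbols below are introduced here for clarity and are not tied to any particular accelerator in [4]–[6]): if an input voltage $V_i$ is applied to row $i$ of the crossbar and $G_{ij}$ denotes the conductance programmed into the ReRAM cell at the crossing of row $i$ and column $j$, the current collected at column $j$ is
\begin{equation}
I_j = \sum_{i} G_{ij} V_i ,
\end{equation}
so that Ohm's law provides the element-wise products and Kirchhoff's current law provides the summation, yielding one VMM output element per column in a single read operation.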