I. Introduction
The availability of big data has made it possible to achieve impressive human-like performance with deep neural networks (DNNs) [1]. However, training such networks on a von Neumann-based system, where the computational unit and memory are physically separated, is slow and energy-intensive due to the need to transfer data between the two units. Recently, in-memory computing with memristive devices as synapses has shown promise for accelerating DNN training [2]-[4]. The synaptic weights can be represented by the device conductances. When the devices are arranged in a crossbar topology, all steps of DNN training - forward propagation, backpropagation, and weight update - can be performed in place with O(1) time complexity, without the need to transfer any data [5].
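As a rough illustration of why the crossbar achieves O(1) complexity, the following sketch simulates the analog computation in software: with weights stored as conductances G, applying input voltages to the rows produces all column currents I = Gᵀv simultaneously (Ohm's law per device, Kirchhoff's current law per column), and a weight update can be applied as an outer product of input and error vectors. All array sizes, learning rate, and value ranges here are hypothetical choices for the example, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_cols = 4, 3  # hypothetical crossbar dimensions

# Synaptic weights stored as device conductances (siemens, illustrative range)
G = rng.uniform(1e-6, 1e-4, size=(n_rows, n_cols))
# Input voltages applied to the crossbar rows
v = rng.uniform(-0.1, 0.1, size=n_rows)

# Forward propagation: every device multiplies (Ohm's law) and every column
# sums (Kirchhoff's law) in parallel, so the whole matrix-vector product
# happens in one analog step regardless of array size.
i_out = G.T @ v

# Weight update: the outer product of the input vector and an error vector
# can likewise be applied in place across the array in a single step.
error = rng.uniform(-0.01, 0.01, size=n_cols)
eta = 1e-3                       # hypothetical learning rate
G += eta * np.outer(v, error)    # in-place conductance increments

print(i_out.shape)
```

In a digital simulation these operations cost O(n·m) work, but in the physical array every device computes concurrently, which is the source of the constant-time claim.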