
Drop-Connect as a Fault-Tolerance Approach for RRAM-based Deep Neural Network Accelerators


Abstract:

Resistive random-access memory (RRAM) is widely recognized as a promising emerging hardware platform for deep neural networks (DNNs). Yet, due to manufacturing limitations, current RRAM devices are highly susceptible to hardware defects, which poses a significant challenge to their practical applicability. In this paper, we present a machine learning technique that enables the deployment of defect-prone RRAM accelerators for DNN applications without requiring hardware modifications, retraining of the neural network, or additional detection circuitry/logic. The key idea is to incorporate a drop-connect inspired approach during the training phase of a DNN: random subsets of weights are selected to emulate fault effects (e.g., set to zero to mimic stuck-at-1 faults), thereby equipping the DNN with the ability to learn and adapt to RRAM defects at the corresponding fault rates. Our results demonstrate the viability of the drop-connect approach, coupled with various algorithm- and system-level design and trade-off considerations. We show that, even in the presence of high defect rates (e.g., up to 30%), the degradation of DNN accuracy can be kept below 1% relative to the fault-free version, while incurring minimal system-level runtime/energy costs.
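As a concrete illustration of the training-time fault emulation described above (the paper provides no code; the layer name, default fault rate, and the zero-value used for masking are illustrative assumptions), a minimal PyTorch-style sketch might look as follows:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaultInjectLinear(nn.Linear):
    """Hypothetical linear layer that, during training, zeroes a random
    subset of weights to emulate RRAM stuck-at defects, in the spirit of
    drop-connect. `fault_rate` is the fraction of weights masked per pass."""

    def __init__(self, in_features, out_features, fault_rate=0.3):
        super().__init__(in_features, out_features)
        self.fault_rate = fault_rate

    def forward(self, x):
        if self.training and self.fault_rate > 0:
            # Sample a fresh random fault mask on every forward pass, so the
            # network learns to tolerate defects landing anywhere in the array.
            mask = (torch.rand_like(self.weight) >= self.fault_rate).float()
            return F.linear(x, self.weight * mask, self.bias)
        # At inference the weights are used as-is; the training regime is
        # what provides tolerance to on-device faults.
        return F.linear(x, self.weight, self.bias)
```

Choosing `fault_rate` to match the expected device defect rate (e.g., 0.3 for the 30% case cited above) is one of the trade-off knobs the abstract alludes to.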
Date of Conference: 22-24 April 2024
Date Added to IEEE Xplore: 29 May 2024
Conference Location: Tempe, AZ, USA

I. Introduction

Recent advancements in Deep Neural Networks (DNNs) have demonstrated significant success across various applications. However, the increasing complexity and capability of DNNs demand substantial computational power and memory bandwidth when conventional Von Neumann architectures are used to accelerate DNN applications. A promising alternative lies in novel architectures built with emerging technologies. Among the various options, the Resistive RAM (RRAM) crossbar-based architecture, composed of memristor cells [1], emerges as an innovative compute-in-memory solution that not only reduces power consumption but also boosts processing speed. A standard RRAM crossbar is illustrated in Fig. 1. Within the crossbar, DNN kernels are unfolded and mapped onto the memristor cells, each of which retains a single weight value, while input data is continuously streamed into the crossbar through its wordlines. The analog nature of this architecture makes it well-suited for vector-matrix multiplication, as the dot-product operation can be realized using Kirchhoff's current law.
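To make the dot-product analogy concrete (a sketch for illustration only; the voltage and conductance values are arbitrary assumptions, not taken from the paper): applying Kirchhoff's current law to an ideal crossbar, the current collected on bitline j is I_j = Σ_i V_i · G_ij, which is exactly a vector-matrix product of the wordline voltages with the stored conductance matrix:

```python
import numpy as np

# Ideal crossbar model: wordline voltages V (inputs) drive a grid of
# memristor conductances G (stored weights). Kirchhoff's current law
# gives the current on bitline j as I_j = sum_i V_i * G[i, j].
V = np.array([0.2, 0.5, 0.1])      # wordline input voltages (arbitrary values)
G = np.array([[1.0, 0.3],          # per-cell conductances (arbitrary values)
              [0.4, 0.8],
              [0.6, 0.2]])
I = V @ G                          # bitline currents = vector-matrix product
print(I)                           # [0.46 0.48]
```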

