Dither NN: An Accurate Neural Network with Dithering for Low Bit-Precision Hardware


Abstract:

Energy-constrained neural network processing is in high demand for various mobile applications. Binary neural networks aggressively enhance computational efficiency but, in exchange, suffer from degraded accuracy due to their extreme approximation. We propose a novel, accurate neural network model based on binarization and "dithering," which distributes the quantization error to neighboring pixels. Because the quantization errors of the binarization are diffused across the plane, a pixel in the multi-level source expression is more accurately represented in the resulting binarized plane by multiple pixels. We designed a low-overhead binary-based hardware architecture for the proposed model. The evaluation results show that this method can be realized with a few additional lightweight hardware components.
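To make the dithering idea concrete, below is a minimal sketch of error-diffusion binarization, assuming a Floyd-Steinberg-style diffusion kernel and a raster scan order; the kernel, scan order, and value range used in Dither NN itself may differ, and the function name here is illustrative only.

    import numpy as np

    def dither_binarize(x):
        """Binarize a 2-D activation map with error diffusion.

        A sketch assuming a Floyd-Steinberg-style kernel; the kernel
        and scan order in Dither NN itself may differ. Input values
        are assumed to lie in [-1, 1]; outputs are {-1, +1}.
        """
        x = x.astype(np.float64).copy()
        h, w = x.shape
        out = np.empty((h, w))
        for i in range(h):
            for j in range(w):
                q = 1.0 if x[i, j] >= 0.0 else -1.0  # hard binarization
                out[i, j] = q
                err = x[i, j] - q                    # quantization error
                # Diffuse the error onto not-yet-visited neighbors.
                if j + 1 < w:
                    x[i, j + 1] += err * 7 / 16
                if i + 1 < h:
                    if j > 0:
                        x[i + 1, j - 1] += err * 3 / 16
                    x[i + 1, j] += err * 5 / 16
                    if j + 1 < w:
                        x[i + 1, j + 1] += err * 1 / 16
        return out

Averaged over a neighborhood, the binary output then tracks the multi-level source more closely than plain sign binarization does, which is the effect the abstract describes.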
Date of Conference: 10-14 December 2018
Date Added to IEEE Xplore: 20 June 2019
Conference Location: Naha, Japan

I. Introduction

Recently, neural network technology has been widely explored and adopted owing to its high generalization capability, enabling applications such as speech recognition, self-driving cars, and smart home devices. The major problem with these neural network applications is that they require extensive computation, which in turn incurs power consumption and memory usage that are not negligible, especially for mobile or IoT use. For example, a convolutional neural network (CNN) model used for image recognition (VGG16 [1]) employs 16 layers, requiring 15 billion multiply-accumulate (MAC) operations and 277 MB of weight memory at 16-bit expression.
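As a rough sanity check on those figures (a back-of-the-envelope estimate, not taken from the paper), VGG16 is widely reported to have about 138 million weights, and at 2 bytes per weight this matches the quoted footprint:

    params = 138_357_544                     # commonly cited VGG16 weight count
    bytes_per_weight = 2                     # 16-bit expression
    print(params * bytes_per_weight / 1e6)   # ~276.7, i.e., about 277 MB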

References

[1] K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," arXiv:1409.1556, 2014.