Energy-efficient ConvNets through approximate computing


Abstract:

Convolutional neural networks (ConvNets) have recently emerged as state-of-the-art classification and detection algorithms, achieving near-human performance in visual detection. However, ConvNet algorithms are typically very computation- and memory-intensive. To embed ConvNet-based classification into wearable platforms and embedded systems such as smartphones or ubiquitous electronics for the internet-of-things, their energy consumption must be reduced drastically. This paper proposes methods based on approximate computing to reduce the energy consumption of state-of-the-art ConvNet accelerators. By combining techniques at both the system and circuit level, we reduce the energy of the system's arithmetic: by up to 30× without losing classification accuracy, and by more than 100× at 99% classification accuracy, compared to the commonly used 16-bit fixed-point number format.
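One of the approximate-computing knobs behind savings like these is reducing arithmetic precision below 16-bit fixed point. The sketch below is illustrative only: the `quantize` helper, the Q1.7 format, and the toy 1-D convolution are assumptions for demonstration, not the accelerator or bit-widths used in the paper.

```python
def quantize(x, int_bits, frac_bits):
    """Round x onto a signed fixed-point grid with the given bit widths."""
    scale = 1 << frac_bits
    lo = -(1 << (int_bits + frac_bits - 1))      # most negative code
    hi = (1 << (int_bits + frac_bits - 1)) - 1   # most positive code
    code = max(lo, min(hi, round(x * scale)))    # round, then saturate
    return code / scale

def conv1d(signal, kernel):
    """Plain valid-mode 1-D convolution on Python lists."""
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(n)]

signal = [0.5, -0.25, 0.75, 0.1, -0.6]
kernel = [0.31, -0.47, 0.22]

exact = conv1d(signal, kernel)
# Re-run with the weights quantized to a hypothetical 8-bit Q1.7 format.
approx = conv1d(signal, [quantize(w, 1, 7) for w in kernel])

max_err = max(abs(a - b) for a, b in zip(exact, approx))
print(max_err)  # small relative to the signal scale
```

The point of such experiments is that the output error introduced by the cheaper, lower-precision multipliers can stay well below the margin that affects the network's classification decision, which is what lets precision be traded for energy.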
Date of Conference: 07-10 March 2016
Date Added to IEEE Xplore: 26 May 2016
Conference Location: Lake Placid, NY, USA

1. Introduction

Recently, neural networks have made an impressive comeback in the field of machine learning. Convolutional neural networks, or ConvNets, are consistently pushing the state-of-the-art in areas like computer vision and speech processing. One of the reasons for this revival is the increasing availability of computing power. Multicore CPUs, GPUs, and even clusters of GPUs are no longer prohibitively expensive and make it possible to train and evaluate larger networks.
