
A 0.23mW Heterogeneous Deep-Learning Processor Supporting Dynamic Execution of Conditional Neural Networks


Abstract:

A deep-learning processor is presented for achieving ultra-low-power operation in mobile applications. Using a heterogeneous architecture that includes a low-power always-on front-end and a selectively-enabled high-performance back-end, the processor dynamically adjusts computational resources at runtime to support conditional execution in neural networks and meet performance targets with increased energy efficiency. Featuring a reconfigurable datapath and a memory architecture optimized for energy efficiency, the processor supports multilevel dynamic activation of neural network segments, performing object detection tasks with 5.3× lower energy consumption than a static baseline design. Fabricated in 40nm CMOS, the processor test-chip dissipates 0.23mW at 5.3 fps. It demonstrates energy scalability up to 28.6 TOPS/W and can be configured to run a variety of workloads, including severely power-constrained ones such as always-on monitoring in mobile applications.
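
To illustrate the conditional-execution idea described above, the following is a minimal software sketch, not the authors' implementation: a cheap always-on front-end stage screens each input, and the expensive back-end network is evaluated only when the front-end is sufficiently confident that a target is present. All weights, layer sizes, helper names, and the WAKE_THRESHOLD value are hypothetical placeholders chosen for illustration.

```python
# Illustrative sketch of two-stage conditional execution (front-end gates back-end).
# Not the paper's hardware datapath; a software analogue with hypothetical parameters.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights: a tiny front-end scorer and a larger back-end classifier.
W_front = rng.standard_normal((64, 2)) * 0.1    # 64-d feature -> wake-up score
W_back1 = rng.standard_normal((64, 256)) * 0.1  # back-end hidden layer
W_back2 = rng.standard_normal((256, 10)) * 0.1  # back-end classifier head

WAKE_THRESHOLD = 0.6  # assumed confidence required to enable the back-end


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


def front_end(frame):
    """Cheap always-on stage: probability that a target is present."""
    return softmax(frame @ W_front)[1]


def back_end(frame):
    """Expensive stage, enabled only on demand: full classification."""
    hidden = np.maximum(frame @ W_back1, 0.0)  # ReLU
    return softmax(hidden @ W_back2)


def process(frame):
    """Conditional execution: run the back-end only if the front-end fires."""
    if front_end(frame) < WAKE_THRESHOLD:
        return None                             # remain in low-power mode
    return int(np.argmax(back_end(frame)))      # back-end prediction


# Example: a stream of random 64-d "frames"; most are rejected cheaply.
frames = rng.standard_normal((100, 64))
results = [process(f) for f in frames]
print(f"back-end invoked on {sum(r is not None for r in results)} of {len(frames)} frames")
```

In this sketch the energy saving comes from how rarely the back-end runs; the processor described in the abstract extends the same principle to multiple activation levels of network segments in hardware.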
Date of Conference: 03-06 September 2018
Date Added to IEEE Xplore: 18 October 2018
ISBN Information:
Print on Demand (PoD) ISSN: 1930-8833
Conference Location: Dresden, Germany

I. Introduction

Deep neural networks (DNNs) are essential elements of Artificial Intelligence (AI) systems. With AI capabilities increasingly moving from data centers to embedded and mobile platforms, there is a growing need for energy-efficient DNN hardware designs capable of supporting always-on operation while meeting stringent power constraints.

