
A High-Performance Accelerator for Super-Resolution Processing on Embedded GPU



Abstract:

Over the past few years, super-resolution (SR) processing has achieved astonishing progress along with the development of deep learning. Nevertheless, the rigorous requirement for real-time inference, especially for video tasks, poses a harsh challenge for both model architecture design and hardware-level implementation. In this article, we propose a hardware-aware acceleration on embedded GPU devices as a full-stack SR deployment framework. The most critical stage in the SR flow, where dictionary learning is applied, is analyzed in detail and optimized with a tailored dictionary slimming strategy. Moreover, we delve into the programming architecture of the hardware while analyzing the model structure, optimizing the computation kernels to reduce inference latency and maximize throughput under restricted computing power. In addition, we further accelerate the model with 8-bit integer inference by quantizing the weights of the compressed model. An adaptive 8-bit quantization flow for the SR task enables the quantized model to achieve results comparable to the full-precision baselines. With the help of our approaches, the computation and communication bottlenecks in deep dictionary learning-based SR models can be overcome effectively. Experiments on both the edge embedded device NVIDIA NX and a 2080Ti show that our framework significantly exceeds the performance of the state-of-the-art NVIDIA TensorRT and can achieve real-time performance.
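
To make the weight-quantization step described above concrete, the following is a minimal sketch of symmetric per-tensor INT8 weight quantization in PyTorch. The function names and the per-tensor granularity are illustrative assumptions and do not reproduce the paper's adaptive quantization flow.

```python
import torch

def quantize_weight_int8(weight: torch.Tensor):
    """Symmetric per-tensor INT8 quantization of a weight tensor (illustrative sketch)."""
    # Map the largest absolute weight value to the INT8 limit 127.
    scale = weight.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(weight / scale), -128, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate FP32 weight, e.g., to check accuracy degradation."""
    return q.float() * scale

# Example: quantize one convolution layer's weights and inspect the error.
w = torch.randn(64, 64, 3, 3)           # hypothetical 3x3 conv kernel of an SR model
q, s = quantize_weight_int8(w)
err = (dequantize(q, s) - w).abs().max()
print(f"max absolute quantization error: {err:.6f}")
```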
Page(s): 3210 - 3223
Date of Publication: 08 February 2023



I. Introduction

Super-resolution (SR) is an important class of image processing techniques that plays a key role in the digital imaging era. The SR task aims to generate or recover high-resolution (HR) images or video frames from low-resolution (LR) inputs. Among existing approaches, the naive solution is to interpolate the LR image, e.g., with bilinear or bicubic interpolation, computing each RGB value from a spatially invariant neighborhood of pixels. Advances in deep learning for computer vision have stimulated a group of powerful SR approaches with impressive performance. From conventional convolutional neural networks [2] to generative adversarial networks [3], [4], various methods have appeared over the last decade. Recently, by introducing dictionary learning methods with pixel-level local feature fusion operations [5], [6], the quality of generated HR images and videos has been further improved, with richer color and texture details recovered. As the algorithms become more performant, the efficient and optimized deployment of such deep learning-based SR methods on hardware has gradually become a new focus of attention.
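
For reference, the naive interpolation baseline mentioned above can be sketched in a few lines of PyTorch; the 4x scale factor and tensor shapes below are illustrative assumptions, not settings from this work.

```python
import torch
import torch.nn.functional as F

def bicubic_upscale(lr: torch.Tensor, scale: int = 4) -> torch.Tensor:
    """Naive SR baseline: bicubic interpolation of an LR frame.

    lr: (N, 3, H, W) RGB tensor in [0, 1]; returns (N, 3, H*scale, W*scale).
    """
    hr = F.interpolate(lr, scale_factor=scale, mode="bicubic", align_corners=False)
    # Bicubic interpolation can overshoot; clamp back to the valid pixel range.
    return hr.clamp(0.0, 1.0)

# Example: upscale a single 180x320 LR frame to 720x1280.
lr_frame = torch.rand(1, 3, 180, 320)
hr_frame = bicubic_upscale(lr_frame, scale=4)
print(hr_frame.shape)  # torch.Size([1, 3, 720, 1280])
```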

