
Improved EDVR Model for Robust and Efficient Video Super-Resolution


Abstract:

Computer vision technologies are increasingly used in daily life, and video super-resolution is gradually drawing more attention in the computer vision community. In this work, we propose an improved EDVR model to tackle the robustness and efficiency problems of the original EDVR model in video super-resolution. First, to handle blurry inputs and emphasize the effective features, we devise a preprocessing module consisting of rigid convolution sub-modules and feature enhancement sub-modules, which are flexible and effective. Second, we devise a temporal 3D convolutional fusion module, which can extract information from image frames more accurately and rapidly. Third, to better utilize the information in feature maps, we design a new reconstruction block by introducing a new channel attention approach. Moreover, we use multiple programmatic methods to accelerate model training and inference, making the model useful for practical applications.
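The abstract does not give the exact form of the new channel attention approach; the sketch below shows the general squeeze-and-excitation pattern such a reconstruction block typically builds on (global pooling, a small bottleneck, then per-channel rescaling). All names, shapes, and the weight matrices `w1`/`w2` are illustrative assumptions, not the paper's design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention (illustrative sketch).

    feat: (C, H, W) feature map.
    w1:   (C, C//r) bottleneck weights (hypothetical, here randomly drawn).
    w2:   (C//r, C) expansion weights back to C channels.
    """
    # Squeeze: global average pooling gives one descriptor per channel.
    squeeze = feat.mean(axis=(1, 2))                      # (C,)
    # Excite: bottleneck MLP with ReLU, then sigmoid gates in (0, 1).
    excite = sigmoid(np.maximum(squeeze @ w1, 0.0) @ w2)  # (C,)
    # Rescale: weight each channel of the feature map by its gate.
    return feat * excite[:, None, None]                   # (C, H, W)
```

In a learned model `w1` and `w2` would be trained parameters; the point of the pattern is that informative channels receive gates near 1 while uninformative ones are suppressed.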
Date of Conference: 04-08 January 2022
Date Added to IEEE Xplore: 15 February 2022
Conference Location: Waikoloa, HI, USA


I. Introduction

Nowadays, computer vision technology plays an important role in both research and industry, but high-resolution data are often difficult to obtain, especially for videos. Video super-resolution is therefore a practical solution. However, video super-resolution algorithms face two challenges. On the one hand, their accuracy is often unsatisfactory. On the other hand, many video applications require high-speed, even real-time, models. Traditional algorithms, such as bicubic and bilinear interpolation, cannot produce the ideal output, while machine-learning-based algorithms achieve better results than these classical methods, but usually at the cost of time-consuming training and an enormous number of model parameters.
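The bilinear interpolation baseline mentioned above can be sketched in a few lines of NumPy. This is a minimal, assumption-laden reference implementation for a single 2-D image with an integer scale factor and edge clamping; it is not the paper's code.

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Upscale a 2-D image by an integer factor using bilinear interpolation.

    img:   (H, W) array.
    scale: integer upscaling factor.
    Edge handling: coordinates outside the image are clamped to the border.
    """
    h, w = img.shape
    nh, nw = h * scale, w * scale
    # Map each output pixel center back to source coordinates.
    ys = (np.arange(nh) + 0.5) / scale - 0.5
    xs = (np.arange(nw) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    # Fractional offsets, clamped so border pixels are repeated.
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    # Blend the four neighbouring source pixels.
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

Such interpolation produces smooth but blurry enlargements because it can only average existing pixels, which is exactly the limitation that learned super-resolution models aim to overcome.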
