
Temporal Consistency Learning of Inter-Frames for Video Super-Resolution


Abstract:

Video super-resolution (VSR) is a task that aims to reconstruct high-resolution (HR) frames from a low-resolution (LR) reference frame and multiple neighboring frames. The key operation is to utilize the relatively misaligned frames for current-frame reconstruction while preserving the consistency of the results. Existing methods generally explore information propagation and frame alignment to improve the performance of VSR. However, few studies focus on the temporal consistency of inter-frames. In this paper, we propose a Temporal Consistency learning Network (TCNet) for VSR in an end-to-end manner, to enhance the consistency of the reconstructed videos. A spatio-temporal stability module is designed to learn self-alignment from inter-frames. Specifically, correlative matching is employed to exploit the spatial dependency of each frame to maintain structural stability. Moreover, a self-attention mechanism is utilized to learn the temporal correspondence and implement an adaptive warping operation for temporal consistency among multiple frames. In addition, a hybrid recurrent architecture is designed to leverage short-term and long-term information. We further present a progressive fusion module to perform a multistage fusion of spatio-temporal features, and the final reconstructed frames are refined by these fused features. Objective and subjective results of various experiments demonstrate that TCNet outperforms several state-of-the-art methods on different benchmark datasets.
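
As a rough illustration of the self-attention-based temporal correspondence and adaptive warping mentioned in the abstract, the sketch below shows, in PyTorch style, how features of a neighboring frame could be attended to and aggregated toward the reference frame. This is a minimal sketch under stated assumptions (the function name attention_warp, the tensor shapes, and the single-head dot-product attention design are illustrative), not the authors' implementation.

    import torch

    def attention_warp(ref_feat, nbr_feat):
        """Align neighboring-frame features to the reference frame via
        dot-product attention (a hypothetical stand-in for the adaptive
        warping described in the abstract).

        ref_feat, nbr_feat: tensors of shape (B, C, H, W).
        Returns a tensor of shape (B, C, H, W) aligned to the reference frame.
        """
        b, c, h, w = ref_feat.shape
        q = ref_feat.flatten(2).transpose(1, 2)   # (B, HW, C) queries from the reference frame
        k = nbr_feat.flatten(2).transpose(1, 2)   # (B, HW, C) keys from the neighboring frame
        v = k                                     # values share the neighbor's features
        # (B, HW, HW) correspondence weights between reference and neighbor positions
        attn = torch.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)
        warped = attn @ v                         # (B, HW, C) adaptively "warped" neighbor features
        return warped.transpose(1, 2).view(b, c, h, w)

    # Example: align a neighboring frame's features to the reference frame.
    ref = torch.randn(1, 16, 32, 32)
    nbr = torch.randn(1, 16, 32, 32)
    aligned = attention_warp(ref, nbr)            # (1, 16, 32, 32)

The idea the sketch captures is that learned correspondence weights, rather than explicit optical flow, decide how neighboring-frame content is re-assembled at each reference-frame position.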
Page(s): 1507 - 1520
Date of Publication: 13 October 2022



I. Introduction

Video Super-Resolution (VSR) is a challenging task that aims to learn complementary information across video frames. Compared with Single Image Super-Resolution (SISR), VSR has to deal with a sequence made up of temporally highly related but misaligned frames. In several previous works [1], [2], VSR was regarded as an extension of SISR, in which the time-series data were super-resolved by image super-resolution methods [3] frame by frame. However, the performance is often unsatisfactory because the temporal information is not well utilized.
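
For concreteness, the frame-by-frame SISR baseline referred to above can be sketched as follows; the helper name sisr_per_frame and the bicubic stand-in model are illustrative assumptions, and the point is only that each frame is upscaled independently, so inter-frame temporal information never enters the reconstruction.

    import torch

    def sisr_per_frame(frames, sisr_model):
        """Naive VSR baseline: apply a single-image SR model to each frame
        independently. `frames` is a list of (C, H, W) tensors; `sisr_model`
        is any SISR network (hypothetical). Neighboring frames are ignored,
        so temporal information and inter-frame consistency are not exploited.
        """
        return [sisr_model(f.unsqueeze(0)).squeeze(0) for f in frames]

    # Example with a trivial stand-in "SISR model" (x4 bicubic upsampling):
    upscale = lambda x: torch.nn.functional.interpolate(
        x, scale_factor=4, mode="bicubic", align_corners=False)
    lr_video = [torch.rand(3, 48, 48) for _ in range(5)]   # 5 low-resolution frames
    hr_video = sisr_per_frame(lr_video, upscale)            # 5 frames of size (3, 192, 192)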

Cites in Papers - IEEE (20)

1. Linfeng He, Meiqin Liu, Qi Tang, Chao Yao, Yao Zhao, "DATA-VSR: Dynamic Trajectory Attention and Texture Adaptive Rooter for Video Super-Resolution", ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp.1-5, 2025.
2. Laigan Luo, Benshun Yi, Zhongyuan Wang, Zheng He, Chao Zhu, "Dual Bidirectional Feature Enhancement Network for Continuous Space-Time Video Super-Resolution", IEEE Transactions on Computational Imaging, vol.11, pp.228-236, 2025.
3. Jun Tang, Lele Niu, Linlin Liu, Hang Dai, Yong Ding, "VMG: Rethinking U-Net Architecture for Video Super-Resolution", IEEE Transactions on Broadcasting, vol.71, no.1, pp.334-349, 2025.
4. Qiang Zhu, Feiyu Chen, Shuyuan Zhu, Yu Liu, Xue Zhou, Ruiqin Xiong, Bing Zeng, "DVSRNet: Deep Video Super-Resolution Based on Progressive Deformable Alignment and Temporal-Sparse Enhancement", IEEE Transactions on Neural Networks and Learning Systems, vol.36, no.2, pp.3258-3272, 2025.
5. Qian Xu, Xiaobin Hu, Donghao Luo, Ying Tai, Chengjie Wang, Yuntao Qian, "Efficiently Exploiting Spatially Variant Knowledge for Video Deblurring", IEEE Transactions on Circuits and Systems for Video Technology, vol.34, no.12, pp.12581-12593, 2024.
6. Xuan Long, Meiqin Liu, Qi Tang, Chao Yao, Jian Jin, Yao Zhao, "Noisy-Residual Continuous Diffusion Models for Real Image Denoising", 2024 IEEE International Conference on Multimedia and Expo (ICME), pp.1-6, 2024.
7. Xinyi Wu, Santiago López-Tapia, Xijun Wang, Rafael Molina, Aggelos K. Katsaggelos, "Real-Time Lightweight Video Super-Resolution With RRED-Based Perceptual Constraint", IEEE Transactions on Circuits and Systems for Video Technology, vol.34, no.10, pp.10310-10325, 2024.
8. Qiang Zhu, Feiyu Chen, Yu Liu, Shuyuan Zhu, Bing Zeng, "Deep Compressed Video Super-Resolution With Guidance of Coding Priors", IEEE Transactions on Broadcasting, vol.70, no.2, pp.505-515, 2024.
9. Tao Qing, Zhichao Sha, Xueying Wang, Jing Wu, "Video Super-Resolution with Recurrent High and Low-Frequency Information Propagation", 2024 6th International Conference on Communications, Information System and Computer Engineering (CISCE), pp.619-626, 2024.
10. Dingyi Li, Yu Liu, Zengfu Wang, Jian Yang, "Video Rescaling With Recurrent Diffusion", IEEE Transactions on Circuits and Systems for Video Technology, vol.34, no.10, pp.9386-9399, 2024.
11. Guanchen Ding, Chang Wen Chen, "Towards Omniscient Feature Alignment for Video Rescaling", ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp.4190-4194, 2024.
12. Jingyun Liang, Jiezhang Cao, Yuchen Fan, Kai Zhang, Rakesh Ranjan, Yawei Li, Radu Timofte, Luc Van Gool, "VRT: A Video Restoration Transformer", IEEE Transactions on Image Processing, vol.33, pp.2171-2182, 2024.
13. Jingyi Wang, Huimin Lu, "Coarse-to-Fine Grained Alignment Video Super-Resolution for Underwater Camera", IEEE Transactions on Consumer Electronics, vol.70, no.1, pp.831-838, 2024.
14. Jun Tang, Chenyan Lu, Zhengxue Liu, Jiale Li, Hang Dai, Yong Ding, "CTVSR: Collaborative Spatial–Temporal Transformer for Video Super-Resolution", IEEE Transactions on Circuits and Systems for Video Technology, vol.34, no.6, pp.5018-5032, 2024.
15. Hui Luo, Zhuangwei Zhuang, Yuanqing Li, Mingkui Tan, Cen Chen, Jianlin Zhang, "Toward Compact and Robust Model Learning Under Dynamically Perturbed Environments", IEEE Transactions on Circuits and Systems for Video Technology, vol.34, no.6, pp.4857-4873, 2024.
16. Sheng Cen, Miao Zhang, Yifei Zhu, Jiangchuan Liu, "AdaDSR: Adaptive Configuration Optimization for Neural Enhanced Video Analytics Streaming", IEEE Internet of Things Journal, vol.11, no.7, pp.11919-11929, 2024.
17. Yihao Huang, Felix Juefei-Xu, Qing Guo, Yang Liu, Geguang Pu, "Dodging DeepFake Detection via Implicit Spatial-Domain Notch Filtering", IEEE Transactions on Circuits and Systems for Video Technology, vol.34, no.8, pp.6949-6962, 2024.
18. Yi Xiao, Qiangqiang Yuan, Kui Jiang, Xianyu Jin, Jiang He, Liangpei Zhang, Chia-Wen Lin, "Local-Global Temporal Difference Learning for Satellite Video Super-Resolution", IEEE Transactions on Circuits and Systems for Video Technology, vol.34, no.4, pp.2789-2802, 2024.
19. Weikang Xue, Lihang Gao, Shuiyi Hu, Tianqi Yu, Jianling Hu, "FGBRSN: Flow-Guided Gated Bi-Directional Recurrent Separated Network for Video Super-Resolution", IEEE Access, vol.11, pp.103419-103430, 2023.
20. Shuo Jin, Meiqin Liu, Yu Guo, Chao Yao, Mohammad S. Obaidat, "Multi-frame Correlated Representation Network for Video Super-Resolution", 2023 International Conference on Computer, Information and Telecommunication Systems (CITS), pp.01-07, 2023.

Cites in Papers - Other Publishers (2)

1. Yunzuo Zhang, Yameng Liu, "Contextual Correspondence Matters: Bidirectional Graph Matching for Video Summarization", Computer Vision – ECCV 2024, vol.15145, pp.300, 2025.
2. Hongjun Liu, Chao Yao, Yalan Zhang, Xiaojuan Ban, "GestureTeach: A gesture guided online teaching interactive model", Computer Animation and Virtual Worlds, 2023.
