
Fusing Attention Network Based on Dilated Convolution for Superresolution


Abstract:

Deep neural networks with different filters or multiple branches have achieved good performance for single-image superresolution (SR) in recent years. However, they ignore the high-frequency components of the multiscale context information of the low-resolution (LR) image. To solve this problem, we propose a fusing attention network based on dilated convolution (DFAN) for SR. Specifically, we first propose a dilated convolutional attention module (DCAM), which captures multiscale contextual information from different regions of LR images by locking onto multiple regions with receptive fields of different sizes. Then, we propose a multifeature attention block (MFAB), which further focuses on the high-frequency components of the multiscale contextual information and extracts more high-frequency features. Experimental results demonstrate that the proposed DFAN achieves performance improvements in terms of both visual quality evaluation and quantitative evaluation.
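The key mechanism behind DCAM is that dilated convolutions enlarge the receptive field without adding parameters: a k-tap kernel with dilation rate d spans k + (k - 1)(d - 1) input positions, so branches with different rates see regions of different sizes. A minimal sketch of this effect in one dimension (the function names and 1-D setting are illustrative assumptions, not the paper's implementation):

```python
# Hedged sketch: how dilation widens a convolution's receptive field,
# the mechanism DCAM relies on to cover regions of different sizes.
# Pure Python, 1-D, valid padding; not the authors' code.

def effective_kernel_size(k, d):
    """Span of a k-tap kernel with dilation rate d."""
    return k + (k - 1) * (d - 1)

def dilated_conv1d(x, kernel, dilation):
    """Valid-mode 1-D dilated convolution (cross-correlation)."""
    k = len(kernel)
    span = effective_kernel_size(k, dilation)
    return [
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ]

x = [1, 2, 3, 4, 5, 6, 7]
print(effective_kernel_size(3, 1))        # 3  (ordinary conv)
print(effective_kernel_size(3, 2))        # 5  (same weights, wider view)
print(dilated_conv1d(x, [1, 1, 1], 2))    # [9, 12, 15]: each output sums
                                          # x[i], x[i+2], x[i+4]
```

Running branches with dilation rates 1, 2, and 3 over the same feature map thus yields responses at three context scales, which an attention module can then weight and fuse.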
Published in: IEEE Transactions on Cognitive and Developmental Systems ( Volume: 15, Issue: 1, March 2023)
Page(s): 234 - 241
Date of Publication: 23 February 2022
