Abstract:
Deep neural networks with different filters or multiple branches have achieved good performance for single-image super-resolution (SR) in recent years. However, they ignore the high-frequency components of the multiscale contextual information in the low-resolution (LR) image. To address this problem, we propose a dilated-convolution-based fusing attention network (DFAN) for SR. Specifically, we first propose a dilated convolutional attention module (DCAM), which captures multiscale contextual information from different regions of LR images by locking onto multiple regions with receptive fields of different sizes. We then propose a multifeature attention block (MFAB), which further focuses on the high-frequency components of the multiscale contextual information and extracts more high-frequency features. Experimental results demonstrate that the proposed DFAN achieves improvements in both visual quality and quantitative evaluation.
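The DCAM's core mechanism, as the abstract describes it, is using dilated convolutions to cover regions of different sizes without changing the kernel size. The sketch below is a hypothetical NumPy re-implementation of a plain dilated 2-D convolution (not the authors' code) that illustrates the effective receptive field of k + (k-1)(d-1) pixels per side for a k x k kernel with dilation d:

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """Valid 2-D convolution with a dilation factor (illustrative sketch).

    A k x k kernel with dilation d spans an effective receptive field of
    k + (k-1)*(d-1) pixels per side, so stacking branches with different
    dilations samples context at multiple scales with the same parameter
    count -- the property the DCAM exploits.
    """
    k = kernel.shape[0]
    eff = k + (k - 1) * (dilation - 1)          # effective field size per side
    h, w = x.shape
    out = np.zeros((h - eff + 1, w - eff + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Strided slice picks the dilated taps inside the effective field.
            patch = x[i:i + eff:dilation, j:j + eff:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

x = np.arange(49, dtype=float).reshape(7, 7)
k = np.ones((3, 3))
y1 = dilated_conv2d(x, k, dilation=1)   # 3x3 receptive field -> 5x5 output
y2 = dilated_conv2d(x, k, dilation=2)   # 5x5 receptive field -> 3x3 output
print(y1.shape, y2.shape)
```

With the same 3x3 kernel, dilation 2 widens the receptive field from 3 to 5 pixels per side, which is why the output shrinks from 5x5 to 3x3 under valid padding; in a real SR network the branches would be padded so their outputs align before fusion.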
Published in: IEEE Transactions on Cognitive and Developmental Systems ( Volume: 15, Issue: 1, March 2023)