
Zoom-to-Inpaint: Image Inpainting with High-Frequency Details


Abstract:

Although deep learning has enabled a huge leap forward in image inpainting, current methods are often unable to synthesize realistic high-frequency details. In this paper, we propose applying super-resolution to coarsely reconstructed outputs, refining them at high resolution, and then downscaling the output to the original resolution. By introducing high-resolution images to the refinement network, our framework is able to reconstruct finer details that are usually smoothed out due to spectral bias – the tendency of neural networks to reconstruct low frequencies better than high frequencies. To assist training the refinement network on large upscaled holes, we propose a progressive learning technique in which the size of the missing regions increases as training progresses. Our zoom-in, refine and zoom-out strategy, combined with high-resolution supervision and progressive learning, constitutes a framework-agnostic approach for enhancing high-frequency details that can be applied to any CNN-based inpainting method. We provide qualitative and quantitative evaluations along with an ablation analysis to show the effectiveness of our approach. This seemingly simple yet powerful approach outperforms existing inpainting methods.
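The zoom-in, refine and zoom-out pipeline can be summarized in a short PyTorch sketch. Everything below is an illustrative assumption: the placeholder convolutions stand in for the paper's coarse inpainting and refinement CNNs, and plain bilinear upsampling stands in for its learned super-resolution network.

```python
# Minimal sketch of the zoom-in, refine, zoom-out strategy. The modules here
# are hypothetical placeholders, not the paper's actual architectures.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ZoomToInpaintSketch(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        # Placeholder CNNs; the framework is agnostic to the specific
        # CNN-based coarse and refinement networks plugged in here.
        self.coarse = nn.Conv2d(4, 3, 3, padding=1)  # image + mask -> coarse fill
        self.refine = nn.Conv2d(3, 3, 3, padding=1)  # high-resolution refinement

    def forward(self, image, mask):
        # mask: 1 inside the hole, 0 on valid pixels
        x = torch.cat([image * (1 - mask), mask], dim=1)
        coarse = self.coarse(x)                       # coarse reconstruction
        # Zoom in: upscale the coarse output before refinement (the paper
        # uses a learned super-resolution network; bilinear stands in here).
        hr = F.interpolate(coarse, scale_factor=self.scale,
                           mode="bilinear", align_corners=False)
        hr = self.refine(hr)                          # refine at high resolution
        # Zoom out: downscale back to the original resolution.
        out = F.interpolate(hr, scale_factor=1 / self.scale,
                            mode="bilinear", align_corners=False)
        # Keep valid pixels from the input; only the hole is synthesized.
        return image * (1 - mask) + out * mask

model = ZoomToInpaintSketch()
img = torch.rand(1, 3, 64, 64)
msk = torch.zeros(1, 1, 64, 64)
msk[..., 16:48, 16:48] = 1
print(model(img, msk).shape)  # torch.Size([1, 3, 64, 64])
```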
Date of Conference: 19-20 June 2022
Date Added to IEEE Xplore: 23 August 2022
Conference Location: New Orleans, LA, USA


1. Introduction

Image inpainting is a long-standing problem in computer vision with many graphics applications. The goal is to fill in missing regions of a masked image such that the output is a natural completion of the captured scene with (i) plausible semantics and (ii) realistic details and textures. The latter can be achieved with traditional inpainting methods that copy patches of valid pixels, e.g., PatchMatch [3], thus preserving the textural statistics of the surrounding regions. Nevertheless, the inpainted results often lack semantic context and do not blend well with the rest of the image. With the advent of deep learning, inpainting neural networks are commonly trained in a self-supervised fashion: random masks are generated and applied to full images to produce the masked images used as the network's input, as sketched below. These networks can produce semantically plausible results thanks to abundant training data. However, the results often lack realistic details and textures, presumably due to the spectral bias [36] of neural networks: high-frequency details are difficult to learn because networks are biased towards learning low-frequency components. This is especially problematic when training neural networks for image restoration tasks such as image inpainting, because high-frequency details must be generated for realistic results.
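As a concrete illustration of this self-supervised setup, and of the progressive learning mentioned in the abstract, the following sketch cuts random square holes from full images and grows the hole size with training progress. The square masks and the linear hole-size schedule are assumptions for illustration, not the paper's exact mask generation or schedule.

```python
# Self-supervised pair construction with a progressively growing hole.
import torch

def random_mask(batch, size, hole, device="cpu"):
    """Random square hole per sample: 1 inside the hole, 0 on valid pixels."""
    mask = torch.zeros(batch, 1, size, size, device=device)
    for b in range(batch):
        y = torch.randint(0, size - hole + 1, (1,)).item()
        x = torch.randint(0, size - hole + 1, (1,)).item()
        mask[b, :, y:y + hole, x:x + hole] = 1
    return mask

def hole_size(step, total_steps, min_hole=16, max_hole=96):
    """Grow the hole linearly as training progresses (assumed schedule)."""
    t = min(step / total_steps, 1.0)
    return int(min_hole + t * (max_hole - min_hole))

images = torch.rand(4, 3, 128, 128)        # stand-in training batch
for step in (0, 5000, 10000):
    h = hole_size(step, total_steps=10000)
    mask = random_mask(images.size(0), images.size(-1), h)
    masked_input = images * (1 - mask)      # network input
    target = images                         # full image supervises the fill
    print(step, h, masked_input.shape)
```

Because the full image serves as its own ground truth, no manual annotation is needed; the growing hole eases the refinement network into completing the large upscaled holes it faces after the zoom-in step.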

