
Zoom-to-Inpaint: Image Inpainting with High-Frequency Details



Abstract:

Although deep learning has enabled a huge leap forward in image inpainting, current methods are often unable to synthesize realistic high-frequency details. In this paper, we propose applying super-resolution to coarsely reconstructed outputs, refining them at high resolution, and then downscaling the output to the original resolution. By introducing high-resolution images to the refinement network, our framework is able to reconstruct finer details that are usually smoothed out due to spectral bias, the tendency of neural networks to reconstruct low frequencies better than high frequencies. To assist training the refinement network on large upscaled holes, we propose a progressive learning technique in which the size of the missing regions increases as training progresses. Our zoom-in, refine, and zoom-out strategy, combined with high-resolution supervision and progressive learning, constitutes a framework-agnostic approach for enhancing high-frequency details that can be applied to any CNN-based inpainting method. We provide qualitative and quantitative evaluations along with an ablation analysis to show the effectiveness of our approach. This seemingly simple yet powerful approach outperforms existing inpainting methods.
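The abstract's zoom-in, refine, and zoom-out strategy, together with the progressive hole-size schedule, can be sketched as follows. This is a minimal, framework-free illustration only: the nearest-neighbour upscaling and average-pooling downscaling stand in for the paper's actual super-resolution and refinement networks, and the linear schedule in `progressive_mask_size` is an assumed form, not the one used in the paper.

```python
import numpy as np

def progressive_mask_size(step, total_steps, min_frac=0.1, max_frac=0.4):
    """Hypothetical progressive-learning schedule: the hole's side length
    (as a fraction of the image side) grows linearly as training progresses."""
    progress = min(step / total_steps, 1.0)
    return min_frac + (max_frac - min_frac) * progress

def upscale_nearest(img, factor):
    """Zoom-in step: nearest-neighbour upscaling as a stand-in for the
    paper's super-resolution network. img has shape (H, W, C)."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def downscale_avg(img, factor):
    """Zoom-out step: average pooling back to the original resolution."""
    h, w, c = img.shape
    return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def zoom_refine_zoom(coarse, refine_fn, factor=2):
    """Refine a coarse inpainting result at high resolution, then downscale.
    refine_fn is a placeholder for the (learned) refinement network."""
    hi = upscale_nearest(coarse, factor)      # zoom in
    refined = refine_fn(hi)                   # refine at high resolution
    return downscale_avg(refined, factor)     # zoom out
```

With an identity `refine_fn`, the pipeline returns the coarse input unchanged; in the actual framework, the refinement network would add the high-frequency detail at the upscaled resolution before the zoom-out step.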
Date of Conference: 19-20 June 2022
Date Added to IEEE Xplore: 23 August 2022
Conference Location: New Orleans, LA, USA


1. Introduction

Image inpainting is a long-standing problem in computer vision and has many graphics applications. The goal is to fill in missing regions in a masked image, such that the output is a natural completion of the captured scene with (i) plausible semantics, and (ii) realistic details and textures. The latter can be achieved with traditional inpainting methods that copy patches of valid pixels, e.g., PatchMatch [3], thus preserving the textural statistics of the surrounding regions. Nevertheless, the inpainted results often lack semantic context and do not blend well with the rest of the image. With the advent of deep learning, inpainting neural networks are commonly trained in a self-supervised fashion, by generating random masks and applying them to the full image to produce masked images that are used as the network's input. These networks are able to produce semantically plausible results thanks to abundant training data. However, the results often lack realistic details and textures, presumably due to the spectral bias [36] of neural networks. That is, high-frequency details are difficult to learn because neural networks are biased towards learning low-frequency components. This is especially problematic when training neural networks for image restoration tasks such as image inpainting, because high-frequency details must be generated for realistic results.
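The self-supervised setup described above can be sketched as follows. This is an illustrative simplification under assumed conventions (a single square hole, and a binary mask where 1 marks valid pixels and 0 marks the hole); real inpainting pipelines typically use more varied mask shapes such as free-form strokes.

```python
import numpy as np

def random_hole_mask(h, w, hole_frac=0.25, rng=None):
    """Generate a random binary mask with one square hole whose side is
    hole_frac of each image side. 1 = valid pixel, 0 = missing (hole)."""
    if rng is None:
        rng = np.random.default_rng()
    side_h, side_w = int(h * hole_frac), int(w * hole_frac)
    top = rng.integers(0, h - side_h + 1)
    left = rng.integers(0, w - side_w + 1)
    mask = np.ones((h, w), dtype=np.float32)
    mask[top:top + side_h, left:left + side_w] = 0.0
    return mask

def make_training_pair(image, mask):
    """Self-supervised pair: the masked image is the network input and the
    original full image is the reconstruction target."""
    masked = image * mask[..., None]  # zero out the hole region
    return masked, image
```

Because the target is simply the unmasked original, no manual annotation is needed; any large image collection yields abundant training data, which is what enables the semantically plausible completions noted above.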

