I. Introduction
Over the past few years, deep convolutional neural networks (CNNs) have advanced the state of the art in a variety of image restoration tasks, including single image super-resolution (SISR) [1], inpainting [2], denoising [3], and colorization [4]. Most state-of-the-art solutions are trained on pairs of manually generated input images and their expected restoration outcomes, which rests on implicit assumptions about the degradation process. Image degradations are typically restricted to a preset level throughout a dataset, e.g., a pre-defined shape, size, or location for inpainting regions [5], [6], or a designated downsampling strategy from high-resolution images [1], [7], [8]. Such restrictions on the input domain lead to severe over-fitting in the resulting CNN models [9], [10]. That is, these models succeed when the assumptions are fulfilled and the test degradations stay within the presumed level, but their performance is not guaranteed in practical applications, where multiple degradations coexist and more flexible restoration is required.