1. Introduction
When a deep neural network is trained under a specific scenario, its generalization ability tends to be limited to that particular setting, and its performance deteriorates under different conditions. This is a major problem in single image super-resolution (SR): until very recently, most neural-network-based methods focused on upscaling low-resolution (LR) images to high-resolution (HR) images solely under the bicubic downsampling setting [13], [15], [16], [26]. Naturally, their performance tends to drop severely when the input LR image is degraded by even a slightly different downsampling kernel, which is often the case for real images [23]. Hence, more recent SR methods aim for blind SR, where the true degradation kernel is unknown [5], [8].
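To make the degradation setting concrete, a commonly used formulation (not stated in this paragraph, so the notation here is an assumption) models the observed LR image y as a blurred, downsampled, and possibly noisy version of the HR image x:

    y = (x \otimes k) \downarrow_s + n,

where k is the degradation (blur) kernel, \otimes denotes convolution, \downarrow_s is decimation by the scale factor s, and n is additive noise. In this view, bicubic downsampling corresponds to one particular fixed choice of k, whereas blind SR must handle the case where k is unknown at test time.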