1. Introduction
Single image super-resolution (SR) is a fundamental task in computer vision and image processing, aiming to reconstruct a high-resolution image from a low-resolution input. Efficient super-resolution aims to improve the efficiency of the SR model while maintaining reconstruction performance. Since the introduction of deep learning into super-resolution [9], many CNN-based methods [8], [10], [11], [25], [26], [29], [34] have been proposed to improve performance, and a series of approaches [10], [17], [19], [25], [26], [28], [30], [36], [56] have been developed to build efficient models for image SR. The majority of these efficient models focus on five factors: runtime, parameters, FLOPs, activations, and depths.

To further promote the development of efficient SR, the first dedicated competition was held as part of the AIM 2019 challenge [60] at ICCV. The information multi-distillation network (IMDN) [19], which won first place in this competition, proposes cascaded information multi-distillation blocks to improve the feature extraction module. Subsequently, the winning solution of the AIM 2020 challenge [61], the residual feature distillation network (RFDN) [36], further improves IMDN through residual learning in the main block. In the efficient SR track of the NTIRE 2022 challenge [24], the winning solution, the residual local feature network (RLFN) [28], removes the hierarchical distillation connections of the residual feature distillation block (RFDB) [36] to reduce inference time. In the efficient SR track of the NTIRE 2023 challenge [56], the winning solution adopts a multi-stage lightweight training strategy that combines distillation and pruning to reduce both runtime and model size.
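To make the architectural trend behind these winning solutions more concrete, the PyTorch sketch below contrasts an RFDB-style block, whose hierarchical distillation connections split off slim feature branches at every stage, with an RLFN-style block that drops those connections in favor of a plain convolutional stack with a local residual connection. This is only an illustrative simplification under assumed hyper-parameters (e.g., 48 feature channels, a 24-channel distillation branch, and three stages), not the exact implementations of [36] or [28].

```python
import torch
import torch.nn as nn


def conv(in_ch, out_ch, k):
    return nn.Conv2d(in_ch, out_ch, k, padding=k // 2)


class DistillationBlockSketch(nn.Module):
    """RFDB-style block (sketch): each stage distills a slim branch that is
    kept for later fusion, while the remaining features are refined further;
    all distilled branches are concatenated and fused at the end.
    Channel widths and the number of stages are illustrative assumptions."""

    def __init__(self, ch=48, dist_ch=24):
        super().__init__()
        self.act = nn.ReLU(inplace=True)
        self.d1, self.r1 = conv(ch, dist_ch, 1), conv(ch, ch, 3)
        self.d2, self.r2 = conv(ch, dist_ch, 1), conv(ch, ch, 3)
        self.d3 = conv(ch, dist_ch, 3)
        self.fuse = conv(3 * dist_ch, ch, 1)

    def forward(self, x):
        d1 = self.act(self.d1(x))
        r1 = self.act(self.r1(x)) + x
        d2 = self.act(self.d2(r1))
        r2 = self.act(self.r2(r1)) + r1
        d3 = self.act(self.d3(r2))
        # Hierarchical distillation: fuse all intermediate distilled features.
        return self.fuse(torch.cat([d1, d2, d3], dim=1)) + x


class PlainResidualBlockSketch(nn.Module):
    """RLFN-style simplification (sketch): the hierarchical distillation
    connections are removed, leaving a stack of 3x3 convolutions, a 1x1
    fusion convolution, and a single local residual connection."""

    def __init__(self, ch=48):
        super().__init__()
        self.body = nn.Sequential(
            conv(ch, ch, 3), nn.ReLU(inplace=True),
            conv(ch, ch, 3), nn.ReLU(inplace=True),
            conv(ch, ch, 3), nn.ReLU(inplace=True),
            conv(ch, ch, 1),
        )

    def forward(self, x):
        return x + self.body(x)


if __name__ == "__main__":
    x = torch.randn(1, 48, 32, 32)
    assert DistillationBlockSketch()(x).shape == x.shape
    assert PlainResidualBlockSketch()(x).shape == x.shape
```

Dropping the intermediate split-and-concatenate operations keeps the feature flow and memory access pattern of the block simple, which is consistent with the motivation stated for RLFN of reducing inference time.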