I. Introduction
Super-resolution (SR) is an extensively researched problem in image processing whose goal is to recover a high-resolution (HR) image from one or more low-resolution (LR) versions (Yang et al., 2014). A great deal of research has been devoted to this area, using either conventional image-processing methods or rapidly developing deep learning-based techniques [1], [2]. Super-resolution is generally challenging because the available information is limited and fine details are lost, rendering it an ill-posed problem. SR algorithms can be divided into two primary categories: single-image super-resolution (SISR) (Tang and Chen; Tsurusaki et al.; Cheng et al.), which aims to recover the original information from a single image, and multi-frame SR (Hung and Siu, 2009; Bätz et al., 2016), a conventional approach that exploits information derived from several frames. SISR methods can be further separated into interpolation-based and learning-based approaches. In the present article, we concentrate on deep learning for super-resolution of omnidirectional images.
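To make the interpolation-based branch of SISR concrete, the sketch below upscales a grayscale image with bilinear interpolation, the simplest classical baseline against which learning-based methods are typically compared. The function name and the toy 2×2 image are illustrative assumptions, not taken from any of the cited works.

```python
def bilinear_upscale(img, scale):
    """Upscale a 2-D grayscale image (list of row lists) by an integer
    factor using bilinear interpolation -- a classic interpolation-based
    SISR baseline. Purely illustrative; real pipelines use library calls."""
    h, w = len(img), len(img[0])
    out_h, out_w = h * scale, w * scale
    out = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            # Map each output pixel back to (fractional) input coordinates.
            src_y = min(y / scale, h - 1)
            src_x = min(x / scale, w - 1)
            y0, x0 = int(src_y), int(src_x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = src_y - y0, src_x - x0
            # Weighted average of the four nearest input pixels.
            out[y][x] = (img[y0][x0] * (1 - dy) * (1 - dx)
                         + img[y0][x1] * (1 - dy) * dx
                         + img[y1][x0] * dy * (1 - dx)
                         + img[y1][x1] * dy * dx)
    return out

# Toy 2x2 LR image upscaled by a factor of 2.
lr = [[0.0, 1.0],
      [1.0, 0.0]]
hr = bilinear_upscale(lr, 2)
```

Such interpolation cannot restore lost high-frequency detail, which is precisely why the learning-based approaches discussed in this article are preferred.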