I. Introduction
Video super-resolution is a classic problem in video processing that addresses how to reconstruct a high-resolution (HR) frame from its downscaled low-resolution (LR) version. For video, the temporal relationships among the input frames can be exploited to combine information from as many LR frames as possible and thereby improve the reconstruction. This paper focuses on video super-resolution using a generative adversarial network (GAN). In general, existing SR approaches can be divided into two categories: frequency-domain methods and spatial-domain methods.
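As a rough illustration of how temporal information can be pooled across frames, the minimal PyTorch sketch below concatenates several consecutive LR frames along the channel dimension and upscales the result with a sub-pixel convolution. The module name, layer sizes, and number of frames are illustrative assumptions, not the architecture proposed in this paper.

```python
# Minimal sketch (not the paper's architecture): fuse several LR frames
# and reconstruct one HR frame. All names and sizes are illustrative.
import torch
import torch.nn as nn


class MultiFrameSRGenerator(nn.Module):
    """Toy generator: concatenates N consecutive LR frames and upscales them."""

    def __init__(self, num_frames=5, scale=4, channels=3, features=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(num_frames * channels, features, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1),
            nn.ReLU(inplace=True),
            # sub-pixel convolution brings the features up to HR resolution
            nn.Conv2d(features, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr_frames):
        # lr_frames: (batch, num_frames, channels, H, W)
        b, n, c, h, w = lr_frames.shape
        x = lr_frames.reshape(b, n * c, h, w)  # stack temporal neighbours along channels
        return self.body(x)


if __name__ == "__main__":
    lr = torch.rand(1, 5, 3, 32, 32)      # five consecutive 32x32 LR frames
    hr = MultiFrameSRGenerator()(lr)
    print(hr.shape)                        # torch.Size([1, 3, 128, 128])
```

In a GAN-based setting such a generator would be trained jointly with a discriminator that judges whether its output looks like a real HR frame; the discriminator is omitted here for brevity.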