1. Introduction
In this paper, we propose a framework for generating synthetic underwater images based on a revised image formation model [1], and use these images to train conditional generative adversarial networks for the restoration of degraded underwater images. Capturing underwater scenes relies heavily on unmanned vehicles (UVs) equipped with imaging sensors, which provide high-resolution views of the sea bed, corals, and archaeological sites. Marine archaeologists use remotely operated vehicles (ROVs) to explore the ocean without being physically present in it [12]. Although underwater scene-capturing technologies have advanced considerably in recent years, the aquatic environment still presents unique challenges not found above water. Due to light attenuation, absorption, and scattering, most underwater images lack contrast and exhibit inaccurate colors. Unlike terrestrial images, where attenuation is assumed to be spectrally uniform, the attenuation of light in water varies with wavelength and depends on the distance the light travels. This wavelength-dependent attenuation causes a color distortion that increases with the distance of an object from the camera, making underwater images appear bluish or greenish compared with the same scene captured above water.
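The wavelength-dependent degradation described above can be sketched numerically. The snippet below is a minimal, illustrative simulation in the spirit of a revised underwater image formation model, where each color channel is attenuated by its own coefficient and mixed with backscattered veiling light; all coefficient values are hypothetical placeholders chosen only to reproduce the bluish/greenish cast, not measured water properties or the exact model of [1].

```python
import numpy as np

def degrade(J, z, beta_D, beta_B, B):
    """Apply per-channel attenuation and backscatter to a clean image J.

    Illustrative form: I_c = J_c * exp(-beta_D_c * z)
                             + B_c * (1 - exp(-beta_B_c * z)),
    where c indexes the color channel, z is the camera-object range,
    beta_D / beta_B are direct / backscatter coefficients, and B_c is
    the veiling (background) light.

    J: (H, W, 3) float array in [0, 1]; z: (H, W) range map in meters;
    beta_D, beta_B, B: length-3 per-channel arrays (R, G, B order).
    """
    z = z[..., None]                      # broadcast range over channels
    direct = J * np.exp(-beta_D * z)      # attenuated signal
    backscatter = B * (1.0 - np.exp(-beta_B * z))
    return np.clip(direct + backscatter, 0.0, 1.0)

# Red light attenuates fastest in water, so its coefficient is largest;
# this is what pushes distant objects toward blue/green hues.
# (Values below are illustrative, not measured.)
beta_D = np.array([0.60, 0.12, 0.08])     # direct attenuation (R, G, B)
beta_B = np.array([0.50, 0.15, 0.10])     # backscatter coefficients
B = np.array([0.05, 0.35, 0.45])          # greenish-blue veiling light

J = np.ones((4, 4, 3)) * 0.8              # flat gray test patch
z = np.full((4, 4), 10.0)                 # 10 m range everywhere
I = degrade(J, z, beta_D, beta_B, B)      # red channel collapses; blue dominates
```

Running this on a neutral gray patch yields an output whose red channel is strongly suppressed relative to green and blue, mimicking the color cast that motivates the restoration task.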