I. Introduction
To address the difficulty of walking or driving in dark environments, we may rely on visual aids (including small devices such as mobile phones), which are more readily available than dedicated hardware such as infrared sensors. Since a video is composed of a sequence of image frames, let us begin with the problem of a single image captured under poor lighting conditions. Images taken under insufficient illumination usually suffer from poor visual quality, such as low contrast and dim colors, and the information they contain is substantially degraded, which reduces their usefulness. Several options exist for taking photos in low-light conditions, such as using a flash, increasing the sensitivity of the camera sensor (ISO), or using a longer exposure time. However, these solutions have significant limitations: flash is not permitted in some public places, such as cinemas, museums, and exhibitions; higher sensor sensitivity often introduces noticeable noise in dark regions; and longer exposure times are impractical for video capture. Burst processing captures multiple low-light images at different exposures within a short time and combines them to obtain a large dynamic range, but it does not generalize to enhancing low-light videos. Hence, capturing low-visibility images in low-light conditions may be unavoidable. Enhancing such low-light images not only improves visual quality but also benefits vision-based systems, such as autonomous driving and vision-based place recognition.
[Figure] Effect of our proposed method (the green and yellow circles highlight the significant improvement).