1 Introduction
Illumination normalization is an important task in computer vision and pattern recognition. One of its most important applications is face recognition under varying illumination. It has been shown, both experimentally [1] and theoretically [48], that in face recognition the variations caused by illumination are more significant than the inherent differences between individuals. Various methods have been proposed for face recognition, including Eigenface [42], Fisherface [5], probabilistic and Bayesian matching [25], subspace LDA [49], Active Shape Models and Active Appearance Models [23], LFA [27], EBGM [45], and SVM [17]. Nevertheless, the performance of most existing algorithms is highly sensitive to illumination variation.

To address the problem of face recognition under varying illumination, several methods have been proposed. The predominant ones include illumination cone methods [6], [14], spherical harmonic-based representations [4], [29], [47], quotient image-based approaches [35], [34], [43], and the correlation filter-based method [32]. However, not only is the performance of most of these methods still far from ideal, but many of them also require knowledge of the light source or a large volume of training data, which is impractical in most real-world scenarios. Consider some of the most recent methods: Lee et al.'s nine points of light method [24] requires perfect alignment between different images, Savvides et al.'s Corefaces [32] needs several training images to achieve its best results, and the recognition rate of Wang et al.'s self-quotient image [43] still leaves room for improvement.