I. Introduction
Human face identification has always been a challenging task. It has a vast application domain covering critical areas such as security systems, defense applications, and intelligent machines. It involves several image processing problems, including face detection, feature extraction, and identification [1], [2], [3]. Visual imagery has been widely used in face identification systems, but it is very sensitive to illumination changes. This limitation can be overcome with the infrared (IR) spectrum, which provides a simpler and more robust way to boost identification performance in uncontrolled environments and against deliberate attempts to obscure identity. However, IR imagery is sensitive to ambient temperature changes and to variations in the heat patterns of the face, and glass is opaque in the IR band, so eyeglasses occlude part of the face. All of these factors degrade face identification performance. This motivated us to fuse information from the visual and thermal spectra, which has the potential to improve face identification performance.

Image fusion is the process of combining multiple images into a composite product that reveals more information than any of the individual input images [4]. The goal of image fusion is to integrate complementary multisensor, multitemporal, and/or multi-view data into a new image that carries more information and offers a more complete view of the face, free from the disturbances affecting either modality alone. With the availability of multiple image sources, image fusion has emerged as a new and promising research area.
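For concreteness, the simplest pixel-level realization of this idea is a weighted average of co-registered visible and thermal face images. The sketch below is purely illustrative and is not the fusion scheme developed in this work; the function name, the weight alpha, and the synthetic inputs are assumptions made for the example.

```python
import numpy as np

def fuse_weighted_average(visible, thermal, alpha=0.6):
    """Fuse two co-registered grayscale face images by pixel-wise weighted averaging.

    visible, thermal : 2-D arrays of identical shape with values in [0, 255].
    alpha            : weight given to the visible image; (1 - alpha) goes to thermal.
    Returns the fused image as an 8-bit array.
    """
    visible = visible.astype(np.float64)
    thermal = thermal.astype(np.float64)
    fused = alpha * visible + (1.0 - alpha) * thermal
    return np.clip(fused, 0, 255).astype(np.uint8)

# Toy example with synthetic 4x4 "images"; real use assumes registered
# visible and thermal face images of the same size.
visible = np.full((4, 4), 200, dtype=np.uint8)   # bright, well-lit region
thermal = np.full((4, 4), 90, dtype=np.uint8)    # cooler thermal response
print(fuse_weighted_average(visible, thermal, alpha=0.6))
```

More sophisticated fusion schemes operate at the feature or decision level rather than averaging raw pixels, but the example captures the basic intent: combining complementary measurements of the same face into a single, more informative image.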