I. Introduction
Computer-Aided Diagnosis (CAD) is an emerging and rapidly evolving research domain in diagnostic radiology. Medical imaging techniques create visual representations of the internal structures of the human body for clinical analysis [1]–[7]. CAD approaches serve as a ‘second opinion’ for radiologists in decision making for life-threatening diseases such as breast cancer [8]–[14], brain tumors [15]–[16] and lung cancer [17]. The complementary nature of medical imaging sensors of different modalities (X-ray, Magnetic Resonance Imaging (MRI), Computed Tomography (CT)) has created a strong need for image fusion to retrieve the relevant information from medical images. ‘Medical Image Fusion’ is the process of combining complementary information from two or more source images into a single image, so as to maximize the information content while minimizing distortion and artifacts in the result [18]–[20]. Fusion is particularly important for multimodal images because a single-modality image provides only modality-specific information; it is therefore not feasible to obtain all the requisite information from an image generated by a single modality [21]–[23]. To elaborate, CT helps in assessing the extent of disease, yet it is limited in the soft-tissue contrast needed to differentiate tumors from scar tissue. MRI, on the other hand, surpasses CT in soft-tissue discrimination, and this soft-tissue contrast allows better visualization of tumors. This highlights the need for multimodality medical imaging sensors that extract clinical information, offering the possibility of data reduction along with better visual representation [24]–[26]. Over the past decades, several fusion algorithms have been proposed, ranging from traditional rules such as simple averaging, weighted averaging, and maximum or minimum selection [27].
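The traditional pixel-level rules named above can be stated in a few lines. The following is a minimal illustrative sketch (function names and the toy arrays are ours, not from any cited work), fusing two co-registered grayscale sources pixel by pixel:

```python
import numpy as np

def fuse_average(a, b):
    """Simple averaging rule: each fused pixel is the mean of the two sources."""
    return (a.astype(np.float64) + b.astype(np.float64)) / 2.0

def fuse_weighted(a, b, w=0.5):
    """Weighted averaging rule: weight w on source a, (1 - w) on source b."""
    return w * a.astype(np.float64) + (1.0 - w) * b.astype(np.float64)

def fuse_maximum(a, b):
    """Maximum selection rule: keep the brighter pixel from either source."""
    return np.maximum(a, b)

# Toy 2x2 "source images" standing in for co-registered CT/MRI slices.
a = np.array([[10, 200], [30, 40]], dtype=np.uint8)
b = np.array([[100, 50], [60, 20]], dtype=np.uint8)
avg = fuse_average(a, b)   # per-pixel mean of the sources
mx = fuse_maximum(a, b)    # per-pixel maximum of the sources
```

Averaging suppresses noise but also dilutes salient detail, while maximum selection keeps bright structures at the cost of ignoring the dimmer source entirely — the trade-off that motivates the multi-resolution methods discussed next.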
With the advancement of research in this field, algorithms such as Intensity-Hue-Saturation (IHS) [28] and the Brovey transform (BT) [29] have been used to fuse medical images. In recent years, multi-resolution approaches using the Mallat [30] and à trous [31] transforms and the contourlet transform [32]–[33] have been proposed for image fusion. Fusion approaches employing wavelet analysis include the SWT [34], LWT [35], MWT [35], RDWT [36], and complex wavelet [27] transforms. Y. Luo et al. [37] combined PCA with the à trous wavelet transform, focusing on spatial and spectral resolution. The technique, however, did not lay emphasis on edge or shape detection, which are fundamental structures in natural images and particularly relevant from a visual point of view. Y. Yang et al. [38] proposed a fusion method based on window selection and the discrete wavelet transform, treating the low- and high-frequency bands with separate selection rules. The method performed better than pixel averaging and conventional DWT with the maximum selection rule, but suffers from reduced contrast in the fused image. D. A. Godse [39] paired a maximum-pixel-intensity rule with wavelets to perform fusion; the combination produced a focused image, but one that suffered from blurring. R. Singh and A. Khare [40] presented a method integrating the Daubechies complex wavelet transform with a weighted-average rule, but the result is highly blurred, since both the relevant and the irrelevant information from the source images are carried into the fused one. S. K. Sadhasivam [41] applied PCA together with maximum-pixel-intensity selection; the resulting image shows low structural similarity to the source images along with low contrast and luminance.
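For readers unfamiliar with PCA-based fusion as used in [37] and [41], the core idea is to weight each source by the principal eigenvector of the covariance of their intensities. The sketch below illustrates that weighting step only; it is our simplified rendering of the general technique, not the exact implementation of any cited work:

```python
import numpy as np

def pca_fusion_weights(a, b):
    """Return fusion weights from PCA: the eigenvector of the 2x2
    covariance of the source intensities with the largest eigenvalue,
    normalized so the two weights sum to one."""
    data = np.stack([a.ravel(), b.ravel()]).astype(np.float64)
    cov = np.cov(data)                      # 2x2 covariance of the sources
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    v = np.abs(vecs[:, np.argmax(vals)])    # principal eigenvector
    return v / v.sum()

def fuse_pca(a, b):
    """Fuse two co-registered images as a PCA-weighted average."""
    w = pca_fusion_weights(a, b)
    return w[0] * a.astype(np.float64) + w[1] * b.astype(np.float64)
```

The source with the larger variance (more structure) receives the larger weight, which is why a low-contrast source contributes little — consistent with the low-contrast behavior reported for [41].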
The above discussion motivates improving the quality of the fused image by removing redundant information from the source images. A complex-wavelet-based approach was chosen over the alternatives because complex wavelets are localized in both time and frequency and can be defined over a specific time span. They therefore preserve time and frequency information while also providing shift invariance and better directionality, making them well suited to medical image fusion. The proposed work accordingly presents a combination of the dual-tree complex wavelet transform (DTCWT) and PCA to address the aforementioned limitations. The obtained results have been evaluated using entropy (E) and fusion factor (FF) as fusion metrics, yielding satisfactory performance. The rest of the paper is organized as follows: the proposed fusion approach is discussed in Section II; Section III presents experimental results; and the paper is concluded in Section IV.
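As background for the metrics named above, the following is a minimal sketch of how entropy and fusion factor are commonly computed, taking FF as the sum of the mutual information between the fused image and each source — a standard definition, though the paper's exact formulas may differ:

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                            # drop empty bins before log
    return -np.sum(p * np.log2(p))

def mutual_information(x, y, bins=256):
    """Mutual information between two images via their joint histogram."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of x
    py = pxy.sum(axis=0, keepdims=True)     # marginal of y
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

def fusion_factor(src_a, src_b, fused, bins=256):
    """FF = MI(A, F) + MI(B, F): information the fused image
    shares with both source images; higher is better."""
    return (mutual_information(src_a, fused, bins)
            + mutual_information(src_b, fused, bins))
```

Entropy rewards fused images with rich gray-level content, while fusion factor rewards faithfulness to both sources, so the two metrics complement each other.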