I. Introduction
PANSHARPENING refers to the fusion of a panchromatic (PAN) and a multispectral (MS) image simultaneously acquired over the same area. It can be seen as a particular data fusion problem, in which one aims at combining, in a unique product, the spatial details resolved by the PAN image (but not present in the MS image) with the several spectral bands of the MS image (against the single band of the PAN). With respect to the general problem of multisensor fusion, pansharpening may not require the challenging phase of spatial coregistration, since the images are typically captured simultaneously, the sensors acquiring the PAN and MS images being mounted on the same platform [1].

Nowadays, PAN and MS images can be obtained in a bundle from several commercial optical satellites, such as IKONOS, GeoEye, OrbView, Landsat, SPOT, QuickBird, WorldView, and Pléiades. For the commercial products with the highest spatial resolution, the PAN resolution is below half a meter, and the spectral resolution can reach eight bands acquired in the visible and near-infrared wavelengths. Fusing the PAN and MS images is the sole way to obtain images with the highest resolution in both the spatial and spectral domains, since physical constraints preclude this goal from being achieved with a single sensor.

The demand for pansharpened data is continuously growing, owing to the increasing availability of commercial products that rely on high-resolution images, e.g., Google Earth and Bing Maps. Furthermore, pansharpening is an important preliminary step for enhancing images in many remote sensing tasks, such as change detection [2], object recognition [3], visual image analysis, and scene interpretation [4].
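To make the fusion idea concrete, the following is a minimal sketch of a naive additive component-substitution scheme: the spatial detail of the PAN image (its difference from an intensity component synthesized from the MS bands) is injected into each upsampled MS band. This toy example on synthetic NumPy arrays is illustrative only and does not correspond to any specific published algorithm.

```python
import numpy as np

# Toy pansharpening sketch (illustrative, not a specific published method):
# inject the spatial detail of a high-resolution PAN image into each band
# of an upsampled low-resolution MS image.

rng = np.random.default_rng(0)
ms = rng.random((4, 64, 64))   # 4 MS bands at low spatial resolution
pan = rng.random((256, 256))   # PAN image at 4x the MS resolution

# Naive nearest-neighbor upsampling of the MS bands to the PAN size
scale = pan.shape[0] // ms.shape[1]
ms_up = ms.repeat(scale, axis=1).repeat(scale, axis=2)

# Intensity component: here simply the mean of the upsampled MS bands
intensity = ms_up.mean(axis=0)

# Additive detail injection: each sharpened band gains the PAN detail
sharpened = ms_up + (pan - intensity)[None, :, :]

print(sharpened.shape)  # (4, 256, 256)
```

By construction, the per-pixel mean of the sharpened bands equals the PAN image, which is the sense in which the PAN spatial detail has been transferred; real algorithms differ mainly in how the intensity component and the injection gains are computed.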