I. Introduction
Interpretation of multisensor remote sensing images is evolving rapidly, enabling the generation of up-to-date land cover information. With the development of diverse image sensors (visible, infrared, synthetic aperture radar (SAR), etc.), a scene can be interpreted by fusing the data these sensors provide (multisensor fusion) [1]. However, the interpretation process is generally affected by numerous types of imperfection [2]. The problem of managing imprecise and uncertain data is therefore growing; indeed, imprecision and uncertainty become more intricate in multisensor fusion [1]. Interpretation systems should be able to handle this kind of information [3]. Two approaches are possible: explicitly representing uncertainty and imprecision, or reasoning directly with uncertain and imprecise information. In this paper, we present the three mathematical frameworks most commonly used to address the imperfection that accompanies the image interpretation process, namely probability theory, possibility theory, and evidence theory [1]. Each of these models has its own operations for combining and processing information, and each is best suited to a particular configuration of the available information and a particular type of imperfection [4]. Most existing image interpretation systems do not account for the imperfection of these images, and the few that do rely on a single theory with very restricted parameters [3].
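As a concrete illustration of how one of these frameworks combines information from two sensors, the following sketch applies Dempster's rule of combination from evidence theory to two mass functions defined over a small frame of discernment. The land-cover classes, sensor names, and mass values are purely hypothetical and chosen for illustration; they are not taken from any system discussed in this paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    using Dempster's rule, normalizing out the conflict mass K."""
    combined = {}
    conflict = 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:  # intersecting focal elements reinforce each other
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:      # disjoint focal elements contribute to conflict K
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    # Normalize by 1 - K so the combined masses sum to one
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Hypothetical frame of discernment with two land-cover classes
WATER, FOREST = frozenset({"water"}), frozenset({"forest"})
BOTH = WATER | FOREST  # mass on the whole frame expresses ignorance

m_optical = {WATER: 0.6, BOTH: 0.4}               # optical sensor favors water
m_sar = {WATER: 0.3, FOREST: 0.5, BOTH: 0.2}      # SAR report is more ambiguous

fused = dempster_combine(m_optical, m_sar)
# Fusion strengthens the belief in "water" relative to either source alone
```

Note how the mass assigned to the full frame lets a source express ignorance rather than forcing a probability split between classes, which is one way evidence theory represents imprecision that probability theory cannot.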