I. Introduction
Robots are increasingly being used to aid in disaster response [1]; however, state-of-the-art robotic systems lack the autonomy required to be deployed in an unknown environment and search for survivors [2]. One of the challenges to overcome is the lack of active perception methodologies that enable a robot to sense, think, and act autonomously. Methods exist to map the interior of one or more rooms using depth sensors or lasers [3], [4]; however, these methods do not reason jointly about multiple sensing modalities and the map.

This paper presents a framework that extends the occupancy grid map formulation to incorporate the conditional dependence that, in this case, arises between the spatial and thermal modalities. To this end, temperature values are incorporated into the map only when they can be associated with current or prior depth information. Conditional mutual information (CMI) is employed to quantify the information gain between the multimodal sensors and the map, as sketched below. Beyond search and rescue, the proposed methodology is relevant to a wide range of domains, including planetary pit and cave exploration, robotic modeling of infrastructure such as bridges, and gas detection in abandoned mines.
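For reference, the CMI between a map random variable $m$ and a thermal measurement $z^{\mathrm{th}}$, conditioned on a depth measurement $z^{\mathrm{d}}$, takes the standard form (the symbols here are illustrative placeholders rather than the notation of the formulation developed later in the paper):
$$
I\!\left(m;\, z^{\mathrm{th}} \mid z^{\mathrm{d}}\right)
= \sum_{z^{\mathrm{d}}} p\!\left(z^{\mathrm{d}}\right)
\sum_{m,\, z^{\mathrm{th}}} p\!\left(m, z^{\mathrm{th}} \mid z^{\mathrm{d}}\right)
\log \frac{p\!\left(m, z^{\mathrm{th}} \mid z^{\mathrm{d}}\right)}
{p\!\left(m \mid z^{\mathrm{d}}\right)\, p\!\left(z^{\mathrm{th}} \mid z^{\mathrm{d}}\right)} .
$$
This quantity is zero exactly when the thermal measurement carries no information about the map beyond what the depth measurement already provides, which motivates using it to decide when a temperature reading should update the map.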