I. Introduction
Video surveillance of environments has been intensively studied in recent years [1], [2]. To increase the mobility of video surveillance systems, vehicles have been used as carriers of such systems [3]–[5]. Applications of video surveillance vehicles include the dynamic monitoring of outdoor events, detection of passersby, assistance for safe driving, warning against dangerous activities, and monitoring of environmental changes.

Various types of cameras have been used to capture environment images. Gandhi and Trivedi [3] surveyed vehicle surround capture techniques and proposed a novel omnivideo-based approach to synthesize dynamic panoramic surround maps using stereo and motion analysis of video images from a pair of omnicameras on a vehicle. Micheloni et al. [4] used an autonomous vehicle to monitor moving objects in indoor environments, whereas Chen and Tsai [5] designed an autonomous vehicle to monitor planar objects on walls in buildings; both works used projective cameras to capture environment images. Onoe et al. [6] and Mituyosi et al. [7] used omnicameras for tracking human body features. A video surveillance system for localizing objects using multiple omnicameras was proposed by Morita et al. [8]. Related works that use pairs of omnicameras with hyperboloidal reflective mirrors can be found in [9] and [10]. In particular, Koyasu et al. [9] proposed an omnidirectional stereo system consisting of two vertically aligned omnicameras to detect and track obstacles, and Ukida et al. [10] used a similar system with a space-encoding scheme to acquire 3-D environment data for various applications. Furthermore, Meguro et al. [11] proposed a method by which a mobile robot reconstructs the 3-D data of nearby static vehicles using a stereo omnicamera (a two-mirror omni-imaging system). Many more related techniques can be found in [15]–[27]; these will be reviewed after the method proposed in this paper is presented.