I. Introduction
ACCURATE perception of the world is crucial for intelligent vehicles. For a vehicle driving down a road, the sensor system should be able to identify the drivable road regions and obstacles, as well as their characteristics such as size, position, direction, and speed. For some higher-level functions, it is also necessary to detect specific features of interest, such as curbs, vehicles, and pedestrians. This problem has been studied for several decades. A number of approaches have focused exclusively on vision [1]–[3], whereas others utilize laser range finders [4]–[11], sometimes in combination with vision [12]. The fusion of range and vision data allows a richer description of the world: three-dimensional structure can be obtained from LIDAR, while a color camera may provide the most informative data for identifying obstacles. How to build consistent and efficient 2D representations out of 3D range data is therefore important both for sensor fusion and for autonomous driving.

In this paper, we present a graph-based approach for 2D road representation of 3D point clouds. The range data are acquired by a Velodyne HDL-64E sensor mounted on top of the vehicle through a rectangular roof rack, as shown in Fig. 1(a). Our objective is to develop a system that robustly detects the drivable road regions and obstacle regions. Such a system should handle a variety of practical challenges, such as sloped terrain, rough road surfaces, and rolling/pitching of the host vehicle, and it should run in real time.

Fig. 1. (a) The Velodyne HDL-64E sensor used in the experiments. (b) The flow diagram of the proposed approach.
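To make the idea of reducing 3D range data to a 2D representation concrete, the sketch below projects a point cloud onto a polar grid and labels each cell by the height spread of the points falling into it. This is only a minimal generic illustration in Python/NumPy, not the graph-based method proposed in this paper; the function name, grid resolution, and height threshold are assumptions chosen for the example.

```python
import numpy as np

def polar_grid_map(points, num_rings=64, num_sectors=360, max_range=60.0,
                   height_diff_thresh=0.15):
    """Project 3D LIDAR points onto a 2D polar grid and flag obstacle cells.

    points: (N, 3) array of x, y, z coordinates in the sensor frame.
    Returns a (num_rings, num_sectors) int8 grid:
        0 = no data, 1 = locally flat (candidate road), 2 = obstacle.
    All parameter values here are illustrative assumptions.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)                   # radial distance from the sensor
    theta = np.arctan2(y, x)             # azimuth angle in [-pi, pi]

    # Discretize into radial rings and angular sectors.
    ring = np.clip((r / max_range * num_rings).astype(int), 0, num_rings - 1)
    sector = ((theta + np.pi) / (2 * np.pi) * num_sectors).astype(int) % num_sectors

    # Track the min and max height observed in each cell.
    z_min = np.full((num_rings, num_sectors), np.inf)
    z_max = np.full((num_rings, num_sectors), -np.inf)
    np.minimum.at(z_min, (ring, sector), z)
    np.maximum.at(z_max, (ring, sector), z)

    grid = np.zeros((num_rings, num_sectors), dtype=np.int8)
    occupied = np.isfinite(z_min)
    # A large height spread within a cell suggests a vertical obstacle;
    # a small spread suggests locally flat, potentially drivable ground.
    grid[occupied] = np.where(
        (z_max - z_min)[occupied] > height_diff_thresh, 2, 1)
    return grid

if __name__ == "__main__":
    # Synthetic scan: flat ground plus a box-shaped obstacle ahead.
    rng = np.random.default_rng(0)
    ground = np.column_stack([rng.uniform(-40, 40, 20000),
                              rng.uniform(-40, 40, 20000),
                              rng.normal(-1.9, 0.02, 20000)])
    box = np.column_stack([rng.uniform(9, 10, 500),
                           rng.uniform(-1, 1, 500),
                           rng.uniform(-1.9, -0.5, 500)])
    grid = polar_grid_map(np.vstack([ground, box]))
    print("obstacle cells:", int((grid == 2).sum()))
```

A polar layout is a natural choice for a spinning LIDAR such as the HDL-64E, since its measurements are inherently sampled in range and azimuth; the cells of such a grid are also the kind of nodes on which a graph-based labeling, as proposed here, can operate.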