You Li - IEEE Xplore Author Profile

Showing 1-21 of 21 results

Results

Light detection and ranging (LiDAR) limitations in adverse weather (e.g., rain, fog, and snow) prevent the adoption of high-level autonomous vehicles in all weather conditions. Furthermore, collecting and annotating these sparse point clouds in adverse weather is often cumbersome, inefficient, and expensive. In this article, we propose a data-driven approach to statistically model the performance of a po...
For vehicles to navigate autonomously, they need to perceive and understand their immediate surroundings. Currently, cameras are the preferred sensors, due to their high performance and relatively low cost compared with other sensors such as LiDARs and radars. However, their performance is limited by inherent imaging constraints: a standard RGB camera may perform poorly in extreme conditions, includi...
This letter proposes a novel method to fuse the asynchronous outputs of a rolling shutter camera and a spinning LiDAR mounted on a moving vehicle. Compared with traditional methods relying only on intrinsic/extrinsic calibration, the proposed method incorporates ego-motion, rolling shutter distortion, and occlusion into the fusion model. In essence, the method estimates the temporal offset between...
As a critical sensor for high-level autonomous vehicles, LiDAR's limitations in adverse weather (e.g., rain, fog, and snow) impede the deployment of self-driving cars in all weather conditions. However, studies in the literature on LiDAR's performance in harsh conditions are insufficient. In this paper, based on a dataset collected with a popular Near-InfraRed (NIR) ToF LiDAR in a well-controlled art...
Camera-based end-to-end driving neural networks bring the promise of a low-cost system that maps camera images to driving control commands. These networks are appealing because they replace laborious hand-engineered building blocks, but their black-box nature makes them difficult to inspect in case of failure. Recent works have shown the importance of using an explicit intermediate representation tha...
By transmitting laser pulses and processing their returns, LiDAR (light detection and ranging) perceives the surrounding environment through distance measurements. Because of its high ranging accuracy, LiDAR is one of the most critical sensors in autonomous driving systems. Revolving around the 3D point clouds generated by LiDARs, numerous algorithms have been developed for object detection/tracking, env...
As a critical sensor for high-level autonomous vehicles, LiDAR's limitations in adverse weather (e.g., rain, fog, and snow) impede the deployment of self-driving cars in all weather conditions. In this paper, we model the performance of a popular 903 nm ToF LiDAR under various fog conditions based on a LiDAR dataset collected in a well-controlled artificial fog chamber. Specifically, a two-stage d...
LiDARs are usually more accurate than cameras in distance measurement. Hence, there is strong interest in applying LiDARs to autonomous driving. Various existing approaches process the rich 3D point clouds for object detection, tracking, and recognition. These methods generally require two initial steps: (1) filter points on the ground plane and (2) cluster non-ground points into objects. This paper p...
Active sensors such as LiDARs (light detection and ranging) are popular in autonomous driving systems for perception and localization. Existing perception approaches process the rich 3D LiDAR point clouds for object detection, tracking and recognition. These methods generally require an initial segmentation procedure containing two steps: (1) filter points as ground and non-ground points, and (2) ...
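The ground-filtering step mentioned in the two abstracts above is commonly implemented with a RANSAC plane fit, although the papers' own methods are not detailed in these previews. A minimal sketch under that standard formulation (function name, iteration count, and distance threshold are illustrative, not from the papers):

```python
import numpy as np

def fit_ground_plane(points, iters=100, dist_thresh=0.2, seed=0):
    """RANSAC plane fit: returns (plane, ground_mask).

    points: (N, 3) array of LiDAR x/y/z coordinates.
    A point is labelled 'ground' if its distance to the best-fitting
    plane is below dist_thresh (metres).
    """
    rng = np.random.default_rng(seed)
    best_plane, best_mask, best_count = None, None, -1
    for _ in range(iters):
        # Sample 3 points and derive the plane passing through them.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Point-to-plane distances; inliers are within the threshold.
        dist = np.abs(points @ normal + d)
        mask = dist < dist_thresh
        if mask.sum() > best_count:
            best_count, best_mask = mask.sum(), mask
            best_plane = (normal, d)
    return best_plane, best_mask
```

Non-ground points (`~ground_mask`) would then be passed to the clustering stage.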
Autonomous vehicles rely on their perception systems to acquire information about their immediate surroundings. It is necessary to detect the presence of other vehicles, pedestrians, and other relevant entities. Safety concerns and the need for accurate estimations have led to the introduction of lidar systems to complement camera- or radar-based perception systems. This article presents a review ...
Convolutional neural networks are the state-of-the-art methods for semantic segmentation, but their resource consumption hinders their usability in real-time mobile robotics applications. Recent works have focused on designing lightweight networks that require fewer resources, but their efficiency is accompanied by a drop in performance. In this work, we propose a pixel-wise weighting of the cros...
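A pixel-wise weighted cross-entropy of the kind named in the truncated abstract above can be sketched as follows; the weighting scheme itself is not given in this preview, so the per-pixel weights are left as an input (this NumPy sketch is our illustration, not the paper's code):

```python
import numpy as np

def weighted_cross_entropy(logits, targets, pixel_weights):
    """Per-pixel weighted cross-entropy for semantic segmentation.

    logits: (H, W, C) raw class scores.
    targets: (H, W) integer class ids.
    pixel_weights: (H, W) weight applied to each pixel's loss term.
    """
    # Numerically stable log-softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    h, w, _ = logits.shape
    # Negative log-likelihood of the target class at every pixel.
    nll = -log_probs[np.arange(h)[:, None], np.arange(w)[None, :], targets]
    # Weighted average so that heavily weighted pixels dominate the loss.
    return (pixel_weights * nll).sum() / pixel_weights.sum()
```

With uniform weights this reduces to the ordinary mean cross-entropy; non-uniform weights let the training signal emphasise, e.g., rare classes or object boundaries.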
In traditional LIDAR processing pipelines, a point-cloud is split into clusters, or objects, which are classified afterwards. This supposes that all the objects obtained by clustering belong to one of the classes that the classifier can recognize, which is hard to guarantee in practice. We thus propose an evidential end-to-end deep neural network to classify LIDAR objects. The system is capable of...
This paper presents three different approaches to inject location information into semantic segmentation Convolutional Neural Networks (CNN) applied to urban scenes. The assumption that location information would improve semantic segmentation performance emerges from the idea that some elements of urban scenes are located in a predictable manner. This assumption is confronted with realistic data o...
We propose an evidential fusion algorithm between LIDAR scans and RGB images. LIDAR points are classified as either belonging to the ground or not, and RGB images are processed by a state-of-the-art convolutional neural network to obtain semantic labels. The results are fused into an evidential grid to assess the drivability of an area encountered by an autonomous vehicle, while accounting for incoherenc...
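Evidential grids of this kind typically rest on Dempster-Shafer theory, where each information source assigns mass to subsets of a frame of discernment and the sources are fused with Dempster's rule of combination. A minimal sketch of that rule (the focal sets and masses in the usage are illustrative, not taken from the paper):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset focal elements to masses
    (each mass function sums to 1). Returns the normalised
    combined mass function.
    """
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            # Compatible evidence: mass goes to the intersection.
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            # Contradictory evidence: accumulate as conflict.
            conflict += wa * wb
    # Normalise by the non-conflicting mass (Dempster normalisation).
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}
```

For a drivability grid, each cell would carry a mass function over, e.g., {drivable, not_drivable}, with mass on the full frame expressing ignorance.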
A product taxonomy can provide knowledge support for many knowledge-based Web services. Merging different taxonomies is an effective way to build a complete product taxonomy. To achieve this goal, a core task is to align the concepts of the different taxonomies. Traditional methods of concept alignment are insufficient in both accuracy and efficiency. In this paper, we propose a new concept alignm...
In this paper, we propose a multiframe high dynamic range (HDR) monocular vision system to improve the imaging quality of traditional CMOS/charge-coupled device (CCD)-based vision systems for advanced driver assistance systems (ADASs). Conventional CMOS/CCD image sensors are confined to a limited dynamic range that impairs the imaging quality under undesirable environments for ADAS (e.g....
In the intelligent vehicle field, occupancy grid maps are popular tools for representing the environment. Usually, occupancy grids, which map the environment as a field of uniformly distributed binary/ternary variables, are generated from various kinds of sensors (e.g. lidar, radar, monocular/binocular vision systems). In the literature, most proposed occupancy grid mapping methods create array-based fixed-...
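Occupancy grids over binary variables are commonly maintained with Bayesian log-odds updates, where each sensor hit or miss shifts a cell's log-odds by a constant from the inverse sensor model. A minimal sketch under that standard formulation (the constants below are illustrative, not from the paper):

```python
import numpy as np

def update_log_odds(grid, cell, hit, l_occ=0.85, l_free=-0.4):
    """Bayesian log-odds update for one occupancy grid cell.

    grid: 2D array of log-odds values (0.0 means p = 0.5, unknown).
    cell: (row, col) index of the observed cell.
    hit:  True if the sensor returned an obstacle in this cell,
          False if the beam passed through it (free space).
    """
    grid[cell] += l_occ if hit else l_free
    return grid

def occupancy_prob(grid):
    """Convert log-odds back to occupancy probabilities."""
    return 1.0 / (1.0 + np.exp(-grid))
```

Repeated consistent observations drive a cell's probability toward 0 or 1, while unobserved cells stay at 0.5; array-based grids of this kind are the fixed-size baseline the abstract contrasts against.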
Current perception systems of intelligent vehicles not only make use of visual sensors, but also take advantage of depth sensors. Extrinsic calibration of these heterogeneous sensors is required for fusing information obtained separately by vision sensors and light detection and ranging (LIDARs). In this paper, an optimal extrinsic calibration algorithm between a binocular stereo vision system and...
Visual sensors and depth sensors, such as cameras and LIDARs (Light Detection and Ranging), are increasingly used together in current perception systems of intelligent vehicles. Fusing information obtained separately from these heterogeneous sensors always requires extrinsic calibration of the vision sensors and LIDARs. In this paper, we propose an optimal extrinsic calibration algorithm between a binoc...
Moving object detection and recognition around an intelligent vehicle are active research fields, and a great number of approaches have been proposed in recent decades. This paper proposes a novel approach based solely on spatial information to solve this problem. Moving object detection is achieved in conjunction with egomotion estimation from sparsely matched feature points. For object recognition...
This paper presents a novel extrinsic calibration algorithm between a binocular stereo vision system and a 2D LIDAR (laser range finder). Extrinsic calibration of these heterogeneous sensors is required to fuse information obtained separately by vision sensor and LIDAR in the context of intelligent vehicle. By placing a planar chessboard at different positions and orientations in front of the sens...
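The chessboard-based procedure itself is not detailed in these previews. As a simplified stand-in, extrinsic calibration between two rigidly mounted sensors amounts to recovering a rigid transform between their frames; for known 3D point correspondences this has the classical Kabsch/SVD least-squares solution, sketched below (this is a generic illustration, not the plane-constraint method the papers describe):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t.

    src, dst: (N, 3) corresponding point sets, e.g. features seen by
    both sensors expressed in each sensor's own frame.
    """
    # Centre both point sets on their centroids.
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance and its SVD give the optimal rotation.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

The plane-based methods in the abstracts instead constrain (R, t) through chessboard plane parameters observed by both sensors, which avoids needing explicit point-to-point correspondences.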