Spatial-Temporal Measurement Alignment of Rolling Shutter Camera and LiDAR


Abstract:

This letter proposes a novel method to fuse the asynchronous outputs of a rolling shutter camera and a spinning LiDAR mounted on a moving vehicle. Compared with traditional methods that rely only on intrinsic/extrinsic calibration, the proposed method incorporates ego-motion, rolling shutter distortion, and occlusion into the fusion model. In essence, the method estimates the temporal offset between the LiDAR and the camera by minimizing the reprojection error of the row values between consecutive iterations. Hence, it can precisely project 3-D LiDAR points onto the image pixels of a rolling shutter camera moving at high speed, which is critical for a multisensor autonomous driving system. Field tests on a vehicle prototype verify the performance gains of the proposed method, particularly at high speeds. The method is especially useful in engineering practice for low-level fusion of LiDAR and rolling shutter camera data.
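The following is a minimal sketch, not the authors' implementation, of the row-value iteration idea summarized above: each LiDAR point is reprojected with a motion correction that depends on the image row it lands on, and the row value is refined over a few iterations. The intrinsics, extrinsics, line delay, ego-velocity, and the constant-velocity motion model are illustrative assumptions; the letter estimates the temporal offset itself by minimizing the change of these row values between consecutive iterations.

```python
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],      # assumed pinhole intrinsics
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
T_cam_lidar = np.eye(4)                   # assumed 4x4 extrinsic calibration
line_delay = 30e-6                        # assumed per-row readout time [s]
velocity = np.array([10.0, 0.0, 0.0])     # assumed constant ego-velocity [m/s]

def project_rs(p_lidar, t_offset, n_iter=5):
    """Project one LiDAR point into a rolling-shutter image.

    t_offset is the temporal offset between the LiDAR measurement and the
    start of the camera exposure; the row value converges after a few
    iterations of re-projection with a row-dependent motion correction.
    """
    row = 0.0
    uv = None
    for _ in range(n_iter):
        # Time at which the current estimate of the pixel row is exposed.
        t_pixel = t_offset + row * line_delay
        # Ego-motion compensation under a constant-velocity model (assumption).
        p_corr = p_lidar - velocity * t_pixel
        p_cam = (T_cam_lidar @ np.append(p_corr, 1.0))[:3]
        if p_cam[2] <= 0:                 # point behind the camera
            return None
        uv = (K @ p_cam) / p_cam[2]
        row = uv[1]                       # updated row value for next iteration
    return uv[:2]

# Example: a point roughly 20 m ahead, with a 5 ms LiDAR-camera offset.
print(project_rs(np.array([1.0, 0.5, 20.0]), t_offset=5e-3))
```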
Published in: IEEE Sensors Letters ( Volume: 6, Issue: 12, December 2022)
Article Sequence Number: 6004204
Date of Publication: 02 December 2022
Electronic ISSN: 2475-1472

I. Introduction

The perception systems of autonomous vehicles usually rely on data fusion between LiDAR and camera [1], [2]. In this letter, we discuss measurement fusion, i.e., projecting 3-D LiDAR points onto a camera image. A classic approach uses extrinsic calibration [3] to estimate a 6-DoF rigid transform from the LiDAR to the camera under the assumption that the sensor measurements are acquired simultaneously. Nevertheless, this premise does not hold for moving vehicles with asynchronous sensors, i.e., sensors that operate independently at different frequencies. In such an architecture, the sensor timestamps are usually coordinated in software, e.g., by the Network Time Protocol (NTP) or the Precision Time Protocol (PTP), yet time differences between sensor outputs always remain.

Moreover, due to cost constraints, rolling shutter cameras prevail in the automotive industry. Because the pixels of a rolling shutter camera are not acquired at the same time, distortions arise in dynamic scenarios. The issues of asynchrony and rolling shutter distortion are amplified by the motion of the vehicle, which leads to misaligned LiDAR and camera data [4]. Fig. 1 illustrates the synchronization problem between a rolling shutter camera and a LiDAR on a moving platform.

Another problem in projecting LiDAR points onto image pixels is occlusion. Because the LiDAR and the camera are mounted at different positions, not every LiDAR point within the camera's field of view can be associated with an image pixel. Therefore, simply projecting the LiDAR points onto the image with the extrinsic parameters is insufficient in real applications; the asynchrony and occlusion between sensors must be handled.
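For contrast, the sketch below illustrates the classic static projection the introduction refers to: apply the extrinsic 6-DoF transform and the pinhole intrinsics, then keep only the nearest point per pixel as a crude occlusion (z-buffer) check. It ignores asynchrony and rolling shutter distortion, and all parameters are assumed values, not taken from the letter.

```python
import numpy as np

def project_static(points_lidar, K, T_cam_lidar, width, height):
    """Project an (N, 3) LiDAR point cloud into image pixels, ignoring
    asynchrony and rolling-shutter distortion."""
    N = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((N, 1))])          # (N, 4) homogeneous
    p_cam = (T_cam_lidar @ homo.T).T[:, :3]                    # points in camera frame
    in_front = p_cam[:, 2] > 0                                  # discard points behind camera
    uvw = (K @ p_cam[in_front].T).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    depth = p_cam[in_front, 2]

    # Crude occlusion handling: keep only the closest point for each pixel.
    depth_buffer = np.full((height, width), np.inf)
    kept = []
    for i in np.argsort(depth):                                 # near to far
        u, v = int(round(uv[i, 0])), int(round(uv[i, 1]))
        if 0 <= u < width and 0 <= v < height and depth[i] < depth_buffer[v, u]:
            depth_buffer[v, u] = depth[i]
            kept.append((u, v, depth[i]))
    return kept
```

On a moving platform, the per-pixel exposure times and the LiDAR-camera temporal offset make this static model increasingly inaccurate, which is the gap the proposed method addresses.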

