I. Introduction
The perception systems of autonomous vehicles usually rely on data fusion between LiDAR and camera [1], [2]. In this letter, we discuss measurement fusion, i.e., projecting 3-D LiDAR points onto a camera image. A classic approach is to use extrinsic calibration [3] to estimate a 6-DoF rigid transform from the LiDAR to the camera under the assumption that the sensor measurements are acquired simultaneously. However, this premise does not hold for moving vehicles with asynchronous sensors, i.e., sensors that operate independently at different frequencies. In such an architecture, the sensor timestamps are usually coordinated by software, e.g., the Network Time Protocol (NTP) or the Precision Time Protocol (PTP), yet time differences between the sensor outputs always remain. Moreover, due to cost constraints, rolling shutter cameras prevail in the automotive industry. Because the pixels of a rolling shutter camera are not acquired at the same time, distortions arise in dynamic scenarios. Both the asynchrony and the rolling shutter distortion are amplified by the motion of the vehicle, which leads to misaligned LiDAR and camera data [4]. Fig. 1 illustrates the synchronization problem between a rolling shutter camera and a LiDAR on a moving platform.

Another problem in projecting LiDAR points onto image pixels is occlusion. Because the LiDAR and the camera are mounted at different positions, not every LiDAR point within the camera's field of view (FoV) can be associated with an image pixel. Therefore, simply projecting the LiDAR points onto an image with the extrinsic parameters is insufficient in real applications; the asynchrony and occlusion between the sensors must be handled.
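For reference, the following is a minimal sketch of the classic projection mentioned above, not the method proposed in this letter: LiDAR points are mapped into the image with a fixed extrinsic rotation R and translation t and pinhole intrinsics K, under the simultaneous-acquisition assumption. All names (project_lidar_to_image, R, t, K) are illustrative, and the sketch deliberately ignores asynchrony, rolling shutter distortion, and occlusion, which is precisely what breaks down on a moving platform.

import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    # points_lidar: (N, 3) LiDAR points; R (3x3), t (3,) map LiDAR to camera
    # coordinates; K (3x3) is the pinhole intrinsic matrix.
    # Transform points from the LiDAR frame to the camera frame: X_cam = R X + t.
    points_cam = points_lidar @ R.T + t

    # Keep only points in front of the camera; points behind it cannot project.
    in_front = points_cam[:, 2] > 0.0
    points_cam = points_cam[in_front]

    # Perspective projection with the pinhole model, then dehomogenize.
    pixels_h = points_cam @ K.T
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]

    # Returns (u, v) pixel coordinates and the visibility mask; no occlusion
    # reasoning or motion compensation is performed in this naive version.
    return pixels, in_front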