I. Introduction
Object detection in 3D point clouds is crucial to many applications in robotics and autonomous vehicles. Point clouds, captured by sensors such as LiDARs, provide accurate 3D information about the system's surroundings. However, they are more difficult to process with deep neural networks than images: unlike images, which form a dense, regular grid of pixels, point clouds are irregular, unstructured, and unordered [1]. Moreover, LiDAR point clouds suffer from multiple types of occlusion and signal miss [2]. External-occlusion is caused by obstacles blocking the laser from reaching an object. Self-occlusion happens when an object's near side hides its far side; it is inevitable and affects every object in a LiDAR scan. Signal miss can be caused by reflective materials deflecting the laser beam away from the sensor or by surfaces with low reflectance. These phenomena often lead to objects appearing incomplete in LiDAR scans.