I. Introduction
Perception is a critical component of autonomous driving systems: navigation and decision making rely heavily on the vehicle's ability to correctly localize and classify the objects around it. Recent lidar-only 3D object detectors have proven to perform extremely well on large public datasets and have topped their challenge leaderboards [1]–[3]. Although these datasets contain similar scenes of roads, pedestrians, and vehicles, they tend to differ from one another in point cloud density, average lane size, and the types of vehicles present [4]. This is because the datasets are collected using different types of lidar sensors, in varying locations around the world, and at times under varying weather conditions. In adverse weather such as rain, snow, and fog, lidar data may be corrupted by a reduced signal-to-noise ratio (SNR) or by power scattered from droplets and particles in the air.