I. Introduction
Point cloud-based 3D detection has achieved significant advances in recent years, yet most models are trained and validated on datasets restricted to a single domain characterized by the same or similar scenarios. When these algorithms are deployed in a different domain, noticeable performance degradation occurs. Domain adaptation covers transfer across sensors, regions, and weather conditions, and obtaining datasets that cover all possible domains for training is impractical. Therefore, domain adaptation and performance generalization are of critical importance for 3D detection.

Light Detection and Ranging (Lidar), as an active sensor, perceives the environment through point clouds reflected from targets. Notable disparities exist in point quality, point density, and target size across domains. For Lidar-based cross-weather domain adaptation, factors such as snowflakes, fog, and raindrops introduce noise that interferes with valid environmental points, resulting in a significant decline in performance. Datasets for severe weather conditions are currently limited, so enhancing the robustness of 3D detection algorithms to severe weather is imperative.

This paper focuses on domain adaptation from clear weather as the source domain to snowy weather as the target domain, where snowy labels are unavailable. The primary strategies for domain adaptation include self-training [1], mean teacher [4], and loss consistency [5]. ST3D++ [6] is an advanced domain adaptation algorithm for cross-dataset and cross-sensor domains. We propose a weather domain adaptation paradigm named PDAST (point density-aware self-training), targeting adaptation from clear to snowy weather in particular.
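For readers unfamiliar with the self-training strategy referenced above, the sketch below outlines the generic pseudo-labeling loop in its simplest form: pre-train on labeled source data, generate confident pseudo-labels on the unlabeled target domain, and fine-tune on them. This is only an illustrative sketch under assumed placeholder names (`detector`, `source_loader`, `target_loader`, `score_thresh`); it is not the PDAST or ST3D++ implementation.

```python
def self_training(detector, source_loader, target_loader,
                  score_thresh=0.7, rounds=3):
    """Minimal sketch of generic self-training for domain adaptation.

    `detector` is assumed to expose `fit` and `predict`; all names here
    are hypothetical placeholders, not the method proposed in this paper.
    """
    # 1. Pre-train on the labeled source (e.g., clear-weather) domain.
    detector.fit(source_loader)

    for _ in range(rounds):
        # 2. Generate pseudo-labels on the unlabeled target (e.g., snowy)
        #    domain, keeping only high-confidence detections.
        pseudo_labeled = []
        for scan in target_loader:
            boxes, scores = detector.predict(scan)
            kept = [b for b, s in zip(boxes, scores) if s >= score_thresh]
            pseudo_labeled.append((scan, kept))

        # 3. Fine-tune the detector on the pseudo-labeled target data.
        detector.fit(pseudo_labeled)

    return detector
```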