
PDAST: Paradigm Towards Weather Domain Adaptation for 3D Detection Based on Density Aware Pooling and Self-Training



Abstract:

Most 3D detection algorithms are developed and validated on datasets collected in clear weather, and their performance deteriorates significantly under severe conditions such as snowy days. To address this problem, we propose PDAST, an unsupervised pipeline for weather domain adaptation. PDAST combines point density dealing (PDD), a data augmentation method, with a density-aware pooling module (PDV); both are plug-and-play modules that, together with a self-training [1] strategy, enhance algorithmic robustness in snowy weather. PDD randomly samples and perturbs the ground-truth point database in the point cloud. The PDV module combines voxel features with density-based positional encoding features as inputs to a multi-head self-attention module, allowing global features to incorporate density information. We validate the effectiveness of PDAST on a typical voxel-based 3D detection baseline, SecondIOU [2], and conduct experiments on the STF [3] dataset, which contains both clear and snowy subsets. In the clear-to-snowy setting, PDAST improves over the baseline detector by about 14.9% to 25.5% in AP_{3D} (average precision of 3D bounding boxes) and 17.5% to 29.3% in AP_{AOS} (average precision of orientation similarity). Furthermore, PDAST's unsupervised clear-to-snowy performance matches or exceeds that of the baseline model fully supervised on snowy data. PDAST thus alleviates the performance degradation caused by severe weather, such as snowy days, and is of great significance for all-weather perception.
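As a rough illustration of the two modules above (this is not the authors' released code; the function names, the Gaussian jitter model, and the radius and ratio parameters are assumptions), PDD-style augmentation subsamples and perturbs a ground-truth object's points, while PDV [22]-style pooling starts from per-voxel density statistics of the kind computed below, encoded with positions before self-attention:

```python
import numpy as np

def pdd_augment(gt_points, keep_ratio_range=(0.6, 1.0), jitter_std=0.02, rng=None):
    """PDD-style augmentation (illustrative sketch): randomly subsample an
    object's ground-truth points, mimicking density loss under snowfall,
    then jitter the survivors, mimicking measurement noise."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(gt_points)
    keep = rng.choice(n, size=max(1, int(n * rng.uniform(*keep_ratio_range))),
                      replace=False)
    sampled = gt_points[keep]
    return sampled + rng.normal(0.0, jitter_std, size=sampled.shape)

def density_feature(points, centers, radius=0.5):
    """Per-center point density (illustrative sketch): count the points
    within `radius` of each voxel center. Density-aware pooling feeds
    such statistics, alongside positions, into self-attention."""
    dists = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=-1)
    return (dists < radius).sum(axis=1).astype(np.float32)
```

In this sketch, `pdd_augment` operates on the (N, 3) point array of a single ground-truth object from the database, and `density_feature` takes the full (N, 3) cloud plus (M, 3) voxel centers and returns one density value per center.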
Date of Conference: 14-16 October 2024
Date Added to IEEE Xplore: 28 November 2024
Conference Location: Tokyo, Japan

I. Introduction

3D detection based on point clouds has achieved significant advances in recent years, yet most models are trained and validated on datasets drawn from a single domain with the same or similar scenarios. When these algorithms are deployed in a different domain, noticeable performance degradation occurs. Domain adaptation includes transfer across sensors, regions, and weather conditions, and obtaining datasets that cover every possible domain for training is impractical. Domain adaptation and performance generalization are therefore of critical importance for 3D detection. Light Detection and Ranging (Lidar), as an active sensor, perceives the environment through point clouds reflected from targets, and notable disparities exist in point quality, point density, and target size across domains. For cross-weather domain adaptation based on Lidar, snowflakes, fog, and raindrops introduce noise that interferes with valid environmental points, resulting in a significant decline in performance. Because datasets for severe weather conditions remain limited, enhancing the robustness of 3D detection algorithms to severe weather is imperative. This paper focuses on domain adaptation from clear weather, serving as the source domain, to snowy weather, serving as the target domain, where snowy labels are unavailable. The primary strategies for domain adaptation include self-training [1], mean teacher [4], and loss consistency [5]; ST3D++ [6] is an advanced domain adaptation algorithm for cross-dataset and cross-sensor settings. We propose a weather domain adaptation paradigm named PDAST (point density-aware self-training), targeting in particular the clear-to-snowy setting.
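For concreteness, the confidence-gating step at the heart of the self-training strategy [1] can be sketched as follows (an illustrative fragment, not the paper's implementation; the `(box, score)` representation, the threshold value, and the `detect`/`finetune` callables are assumptions): a detector trained on the clear source domain pseudo-labels unlabeled snowy frames, and only high-confidence detections are kept as training targets for fine-tuning.

```python
def filter_pseudo_labels(detections, score_thr=0.6):
    """Keep only high-confidence detections as pseudo-labels.
    `detections` is a list of (box, score) pairs; the box format is
    left abstract. Illustrative, with an assumed threshold of 0.6."""
    return [(box, score) for box, score in detections if score >= score_thr]

def self_training_round(detect, finetune, target_frames, score_thr=0.6):
    """One self-training round (sketch): pseudo-label each unlabeled
    target-domain frame, gate by confidence, then fine-tune the detector
    on the surviving labels. `detect` and `finetune` are caller-supplied
    hooks standing in for the detector's inference and training steps."""
    pseudo = [filter_pseudo_labels(detect(f), score_thr) for f in target_frames]
    finetune(target_frames, pseudo)
```

In practice such rounds are iterated, with the refreshed model producing progressively cleaner pseudo-labels on the target domain.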

References
1.
Q. Xie, M.-T. Luong, E. Hovy and Q. V. Le, "Self-training with noisy student improves ImageNet classification", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10687-10698, 2020.
2.
Y. Yan, Y. Mao and B. Li, "Second: Sparsely embedded convolutional detection", Sensors, vol. 18, no. 10, pp. 3337, 2018.
3.
M. Bijelic, T. Gruber, F. Mannan, F. Kraus, W. Ritter, K. Dietmayer, et al., "Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11682-11692, 2020.
4.
A. Tarvainen and H. Valpola, "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", Advances in neural information processing systems, vol. 30, 2017.
5.
W. Zhang, W. Li and D. Xu, "SRDAN: Scale-aware and range-aware domain adaptation network for cross-dataset 3D object detection", Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 6769-6779, 2021.
6.
J. Yang, S. Shi, Z. Wang, H. Li and X. Qi, "St3d++: Denoised self-training for unsupervised domain adaptation on 3d object detection", IEEE transactions on pattern analysis and machine intelligence, vol. 45, no. 5, pp. 6354-6371, 2022.
7.
W. Zheng, W. Tang, L. Jiang and C.-W. Fu, "SE-SSD: Self-ensembling single-stage object detector from point cloud", Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 14494-14503, 2021.
8.
W. Zheng, W. Tang, S. Chen, L. Jiang and C.-W. Fu, "Cia-ssd: Confident iou-aware single-stage object detector from point cloud", Proceedings of the AAAI conference on artificial intelligence, vol. 35, no. 4, pp. 3555-3562, 2021.
9.
S. Shi, C. Guo, L. Jiang, Z. Wang, J. Shi, X. Wang, et al., "Pv-rcnn: Point-voxel feature set abstraction for 3d object detection", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10529-10538, 2020.
10.
S. Shi, Z. Wang, J. Shi, X. Wang and H. Li, "From points to parts: 3d object detection from point cloud with part-aware and part-aggregation network", IEEE transactions on pattern analysis and machine intelligence, vol. 43, no. 8, pp. 2647-2664, 2020.
11.
X. Bai et al., "Transfusion: Robust Lidar-camera fusion for 3d object detection with transformers", Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 1090-1099, 2022.
12.
Z. Liu et al., "Bevfusion: Multi-task multi-sensor fusion with unified bird's-eye view representation", 2023 IEEE international conference on robotics and automation (ICRA), pp. 2774-2781, 2023.
13.
Y. Li et al., "Deepfusion: Lidar-camera deep fusion for multimodal 3d object detection", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17182-17191, 2022.
14.
J. Yan et al., "Cross modal transformer via coordinates encoding for 3d object detection", arXiv preprint, 2023.
15.
Y. Li et al., "Unifying voxel-based representation with transformer for 3d object detection", Advances in Neural Information Processing Systems, vol. 35, pp. 18442-18455, 2022.
16.
Z. Luo et al., "Unsupervised domain adaptive 3d detection with multilevel consistency", Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8866-8875, 2021.
17.
Y. Wang et al., "Train in germany test in the usa: Making 3d object detectors generalize", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11713-11723, 2020.
18.
D. Hegde et al., "Uncertainty-aware mean teacher for source-free unsupervised domain adaptive 3d object detection", arXiv preprint, 2021.
19.
X. Peng, X. Zhu and Y. Ma, "CL3D: Unsupervised domain adaptation for cross-Lidar 3d detection", Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 2, pp. 2047-2055, 2023.
20.
J. Yang et al., "St3d: Self-training for unsupervised domain adaptation on 3d object detection", Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10368-10378, 2021.
21.
D. Hegde et al., "Source-free unsupervised domain adaptation for 3d object detection in adverse weather", 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 6973-6980, 2023.
22.
J. S. K. Hu, T. Kuai and S. L. Waslander, "Point density-aware voxels for Lidar 3d object detection", Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8469-8478, 2022.
23.
C. R. Qi, L. Yi, H. Su and L. J. Guibas, "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", Advances in Neural Information Processing Systems, vol. 30, 2017.
24.
N. Carion et al., "End-to-end object detection with transformers", European conference on computer vision, pp. 213-229, 2020.
25.
OpenPCDet Development Team, "OpenPCDet: An open-source toolbox for 3D object detection from point clouds", [online] Available: https://github.com/open-mmlab/OpenPCDet.
