Point-Voxel Fusion for 3D Object Detection | IEEE Conference Publication | IEEE Xplore


Abstract:

In 3D object detection, network prediction accuracy is strongly affected by the feature richness of the point cloud, which in turn depends on the fine-grained features the network can extract. Some current methods use a voxel encoding that is repeatedly down-scaled by 3D convolutions to improve detection efficiency, but they lose too many fine-grained features. Other methods feed the raw point cloud directly into a multi-layer perceptron (MLP) for feature extraction, which retains more fine-grained features but greatly reduces detection efficiency. This work combines voxel features and point features to obtain a fused 3D feature map. We use an attention module that combines semantic features with spatial features to process this 3D feature map, building a richer 3D feature structure that reduces the loss of Z-axis features. Since object geometry is important for the detection task, we design a geometry-oriented auxiliary network, jointly optimized by supervising two tasks in the training phase, that guides the backbone network to learn target structure features and is discarded in the inference phase. Experiments show that the proposed method outperforms several previous methods on KITTI 3D/BEV detection.
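The point-voxel fusion the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's actual design: the voxel size, mean pooling, and simple concatenation of per-point and per-voxel features are all assumptions made for clarity.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Assign each point an integer voxel coordinate."""
    return np.floor(points[:, :3] / voxel_size).astype(np.int64)

def fuse_point_voxel_features(points, point_feats, voxel_size=0.5):
    """Concatenate each point's own (fine-grained) feature with the
    mean-pooled feature of the voxel it falls into (coarse context).

    Hypothetical sketch of point-voxel fusion; the real network would
    learn these features rather than pool raw inputs.
    """
    voxel_idx = voxelize(points, voxel_size)
    # Map each occupied voxel to a contiguous id.
    _, inverse = np.unique(voxel_idx, axis=0, return_inverse=True)
    n_voxels = inverse.max() + 1
    # Mean-pool the point features inside each voxel.
    sums = np.zeros((n_voxels, point_feats.shape[1]))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, point_feats)
    np.add.at(counts, inverse, 1)
    voxel_feats = sums / counts[:, None]
    # Each point gets [own feature | pooled feature of its voxel].
    return np.concatenate([point_feats, voxel_feats[inverse]], axis=1)

# Toy example: 4 points with 2-dim features, two occupied voxels.
pts = np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2],
                [1.1, 0.1, 0.1], [1.2, 0.2, 0.2]])
feats = np.arange(8, dtype=float).reshape(4, 2)
fused = fuse_point_voxel_features(pts, feats, voxel_size=1.0)
print(fused.shape)  # (4, 4): per-point dim + per-voxel dim
```

The fused representation keeps the fine-grained per-point channel while adding voxel-level context, which is the trade-off between MLP-on-points and voxel-convolution pipelines that the abstract motivates.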
Date of Conference: 24-26 July 2023
Date Added to IEEE Xplore: 18 September 2023

Conference Location: Tianjin, China
