
SparseDet: A Simple and Effective Framework for Fully Sparse LiDAR-Based 3-D Object Detection



Abstract:

LiDAR-based sparse 3-D object detection plays a crucial role in autonomous driving applications due to its computational efficiency advantages. Existing methods either use the features of a single central voxel as an object proxy or treat an aggregated cluster of foreground points as an object proxy. However, the former cannot aggregate contextual information, resulting in insufficient information expression in object proxies, while the latter relies on multistage pipelines and auxiliary tasks, which reduce the inference speed. To maintain the efficiency of the sparse framework while fully aggregating contextual information, in this work we propose SparseDet, which uses sparse queries as object proxies. It introduces two key modules, the local multiscale feature aggregation (LMFA) module and the global feature aggregation (GFA) module, aiming to fully capture contextual information and thereby enhance the ability of the proxies to represent objects. The LMFA module fuses sparse key-voxel features across scales via coordinate transformations and nearest-neighbor relationships to capture object-level details and local contextual information, whereas the GFA module uses self-attention to selectively aggregate the features of the key voxels across the entire scene, capturing scene-level contextual information. Experiments on nuScenes and KITTI demonstrate the effectiveness of our method. Specifically, SparseDet surpasses the previous best sparse detector VoxelNeXt (a typical method using voxels as object proxies) by 2.2% mean average precision (mAP) at 13.5 frames/s on nuScenes, and outperforms VoxelNeXt by 1.12% $\text{AP}_{\text{3-D}}$ on hard-level tasks at 17.9 frames/s on KITTI. Moreover, SparseDet not only exceeds the mAP of FSDV2 (a classical method using clusters of foreground points as object proxies) but also runs 1.3 times faster than FSDV2 on the nuScenes test set. The code has b...
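The abstract's description of the GFA module, self-attention applied to sparse key-voxel features to gather scene-level context, can be pictured with a minimal sketch. The class name GlobalFeatureAggregation, the channel/head sizes, and the padding-mask handling below are illustrative assumptions, not the authors' released implementation.

# Minimal sketch of a GFA-style global aggregation step, assuming the behavior
# described in the abstract (self-attention over sparse key-voxel features).
# Names and hyperparameters are illustrative, not the paper's code.
import torch
import torch.nn as nn

class GlobalFeatureAggregation(nn.Module):
    """Aggregate scene-level context across sparse key voxels via self-attention."""

    def __init__(self, channels: int = 128, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, voxel_feats: torch.Tensor, padding_mask: torch.Tensor) -> torch.Tensor:
        # voxel_feats: (B, N, C) features of the N key voxels per scene (zero-padded)
        # padding_mask: (B, N) True where a slot is padding rather than a real voxel
        attended, _ = self.attn(
            voxel_feats, voxel_feats, voxel_feats, key_padding_mask=padding_mask
        )
        # Residual connection keeps the original per-voxel features alongside
        # the globally aggregated context.
        return self.norm(voxel_feats + attended)

if __name__ == "__main__":
    feats = torch.randn(2, 512, 128)              # two scenes, 512 key voxels, 128 channels
    mask = torch.zeros(2, 512, dtype=torch.bool)  # no padding in this toy example
    out = GlobalFeatureAggregation()(feats, mask)
    print(out.shape)  # torch.Size([2, 512, 128])

The sketch keeps only the scene-level attention step; the LMFA module's cross-scale fusion via coordinate transformations and nearest neighbors would precede it in the full pipeline.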
Article Sequence Number: 5707114
Date of Publication: 26 September 2024



I. Introduction

Three-dimensional object detection is a critical task in autonomous driving that promotes advances in intelligent transportation systems and has gained widespread attention [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18]. With the availability of various sensor modalities, such as cameras and LiDAR, significant progress has been made in single-modal 3-D object detection using either camera images [19], [20], [21], [22], [23] or LiDAR point clouds [1], [2]. Compared to the image data provided by cameras, LiDAR point clouds offer accurate depth and position information and have led to extensive research in recent years [24], [25], [26], [27], [28], [29], [30], [31].
