APFN: Adaptive Perspective-Based Fusion Network for 3-D Place Recognition | IEEE Journals & Magazine | IEEE Xplore

APFN: Adaptive Perspective-Based Fusion Network for 3-D Place Recognition


Abstract:

Place recognition plays a pivotal role in the field of computer vision. The feature pyramid, an advanced architecture introduced into place recognition, aims to produce features with richer semantic content. However, existing methods ignore the efficient utilization of low-level features. To tackle this issue, we propose a novel place recognition architecture called the adaptive perspective-based fusion network (APFN). The main benefits of APFN lie in three aspects: 1) it adaptively optimizes the appropriate perspective and dynamically assigns perspective-based weights to the multiscale low-level feature maps via a newly designed adaptive perspective-based attention (APA) module; 2) it effectively enhances the extracted low-level features and significantly shortens the transmission distance of low-level information; and 3) it enhances global information extraction by supervising the generation of high-level features through regularization. Extensive experiments on several public datasets validate the effectiveness of our method. APFN outperforms previous baseline methods by 1.6 percentage points in average recall at top-1% (AR@1%) and 1.2 percentage points in average recall at top-1 (AR@1).
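The abstract's first benefit, dynamically weighting multiscale low-level feature maps, can be illustrated with a minimal sketch. The code below is not the paper's APA module (whose exact formulation is not given in this excerpt); it only shows the generic idea of fusing feature maps, already resampled to a common resolution, with softmax-normalized adaptive weights.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def adaptive_fusion(feature_maps, logits):
    """Fuse multiscale feature maps (each of shape H x W x C, already
    resampled to a common resolution) with softmax-normalized scalar
    weights. Illustrative stand-in for an attention-weighted fusion;
    the paper's APA module is more elaborate."""
    weights = softmax(np.asarray(logits, dtype=np.float64))
    stacked = np.stack(feature_maps, axis=0)        # (S, H, W, C)
    return np.tensordot(weights, stacked, axes=1)   # (H, W, C)

# Hypothetical example: three scales of 8x8 maps with 16 channels.
rng = np.random.default_rng(0)
maps = [rng.random((8, 8, 16)) for _ in range(3)]
fused = adaptive_fusion(maps, logits=[0.2, 1.0, -0.5])
```

In a real network the logits would be predicted per input by a small attention branch rather than fixed, so the weighting adapts to each point cloud.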
Article Sequence Number: 2400110
Date of Publication: 24 June 2024

I. Introduction

Place recognition [1], [2], [3] aims to retrieve the most similar scene from a geotagged scene database so that the location of a given query scene can be determined. With the advancement of 3-D sensors, LiDAR-based place recognition is playing an increasingly important role in the computer vision and robotics communities, with applications such as robot navigation [4], [5], [6], autonomous driving [7], [8], [9], and augmented reality [10], [11]. In this article, we focus on learning a discriminative descriptor via LiDAR-based place recognition, as opposed to image-based place recognition, because LiDAR point clouds are more robust to illumination, weather, and seasonal changes [12]. Fig. 1 displays our pipeline.

Pipeline of LiDAR-based place recognition. All query point clouds and point clouds in the database are transformed into descriptors through the APFN model. Afterward, the recognition task is performed by searching for the closest descriptor of the query in the database.
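The retrieval step described in the pipeline caption, finding the database descriptor closest to the query descriptor, can be sketched as a nearest-neighbor search. This is a generic illustration under the assumption of fixed-length descriptors compared by Euclidean distance; the paper's descriptor dimensionality and distance metric are not stated in this excerpt.

```python
import numpy as np

def retrieve_top_k(query_desc, db_descs, k=1):
    """Return indices of the k database descriptors closest to the
    query under Euclidean distance -- the standard retrieval step in
    LiDAR place recognition pipelines."""
    dists = np.linalg.norm(db_descs - query_desc, axis=1)
    return np.argsort(dists)[:k]

# Hypothetical example: a database of 100 descriptors of dimension 256.
rng = np.random.default_rng(0)
db = rng.random((100, 256))
# A query that is a slightly perturbed copy of database entry 42.
query = db[42] + 0.001 * rng.random(256)
top1 = retrieve_top_k(query, db, k=1)
```

In practice the database search is accelerated with an index structure (e.g., a KD-tree or approximate nearest-neighbor library) rather than a brute-force scan, and metrics such as AR@1 and AR@1% are computed by checking whether any of the top-ranked matches lies within a ground-truth distance of the query location.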
