
Hierarchical Point Attention for Indoor 3D Object Detection


Abstract:

3D object detection is an essential vision technique for various robotic systems, such as augmented reality and domestic robots. Transformers as versatile network architectures have recently seen great success in 3D point cloud object detection. However, the lack of hierarchy in a plain transformer restrains its ability to learn features at different scales. Such limitation makes transformer detectors perform worse on smaller objects and affects their reliability in indoor environments where small objects are the majority. This work proposes two novel attention operations as generic hierarchical designs for point-based transformer detectors. First, we propose Aggregated Multi-Scale Attention (MS-A) that builds multi-scale tokens from a single-scale input feature to enable more fine-grained feature learning. Second, we propose Size-Adaptive Local Attention (Local-A) with adaptive attention regions for localized feature aggregation within bounding box proposals. Both attention operations are model-agnostic network modules that can be plugged into existing point cloud transformers for end-to-end training. We evaluate our method on two widely used indoor detection benchmarks. By plugging our proposed modules into the state-of-the-art transformer-based 3D detectors, we improve the previous best results on both benchmarks, with more significant improvements on smaller objects.
Date of Conference: 13-17 May 2024
Date Added to IEEE Xplore: 08 August 2024
Conference Location: Yokohama, Japan

I. INTRODUCTION

3D computer vision models (e.g., object detectors) help robotic and control systems perceive and understand the environment from 3D data (e.g., point clouds), which provide more accurate geometric and spatial information and are robust to illumination changes and domain shifts. Since point clouds lack the grid-like structure of images, previous works have proposed various neural network architectures for point cloud understanding [1] – [13]. With the success of attention-based architectures (i.e., transformers) in other learning regimes [14] – [16], they have recently been applied to point clouds [17] – [23]. Several properties of transformers make them well suited to modeling point clouds. For example, their permutation invariance is necessary for modeling unordered sets such as point clouds, and their attention mechanism helps learn long-range relationships and capture global context.
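The permutation property mentioned above can be seen directly in the attention equations. The following is a minimal numpy sketch (not the paper's detector; the toy projection matrices `Wq`, `Wk`, `Wv` and random point features are illustrative assumptions): self-attention without positional encodings is permutation-equivariant, so permuting the input points simply permutes the outputs, and a symmetric pooling on top yields a permutation-invariant set representation.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over a set of N points with d-dim features."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    # Row-wise softmax over attention scores.
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    A = e / e.sum(axis=1, keepdims=True)
    return A @ V

rng = np.random.default_rng(0)
N, d = 6, 4
X = rng.normal(size=(N, d))                      # toy point features
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

perm = rng.permutation(N)
out = self_attention(X, Wq, Wk, Wv)
out_perm = self_attention(X[perm], Wq, Wk, Wv)

# Equivariance: permuting the input points permutes the outputs identically.
assert np.allclose(out[perm], out_perm)
# A symmetric pooling (e.g., mean) then gives a permutation-invariant feature.
assert np.allclose(out.mean(axis=0), out_perm.mean(axis=0))
```

This is why attention needs no imposed point ordering, unlike grid convolutions; real point cloud transformers add coordinate-based positional information, which preserves this property because positions travel with their points.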
