ODTFormer: Efficient Obstacle Detection and Tracking with Stereo Cameras Based on Transformer



Abstract:

Obstacle detection and tracking represent a critical component in robot autonomous navigation. In this paper, we propose ODTFormer, a Transformer-based model that addresses both obstacle detection and tracking problems. For the detection task, our approach leverages deformable attention to construct a 3D cost volume, which is decoded progressively in the form of voxel occupancy grids. We further track the obstacles by matching the voxels between consecutive frames. The entire model can be optimized in an end-to-end manner. Through extensive experiments on DrivingStereo and KITTI benchmarks, our model achieves state-of-the-art performance in the obstacle detection task. We also report comparable accuracy to state-of-the-art obstacle tracking models while requiring only a fraction of their computation cost, typically ten-fold to twenty-fold less. Our code is available at https://github.com/neu-vi/ODTFormer.
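
To make the pipeline described above concrete, the following is a minimal, illustrative PyTorch sketch and not code from the ODTFormer repository. All module and variable names (ToyODT, VoxelOccupancyHead, feats_t, and so on) are hypothetical; standard multi-head attention stands in for the deformable attention used in the paper, and a cosine-similarity matrix stands in for its voxel-matching module. Shapes and hyperparameters are placeholders chosen only to make the example runnable.

```python
# Illustrative sketch of a detection-and-tracking pipeline in the spirit of
# ODTFormer: voxel queries attend to stereo image features to form a 3D cost
# volume, which is decoded into voxel occupancy; tracking matches voxel
# features between consecutive frames. Names and shapes are hypothetical.
import torch
import torch.nn as nn


class VoxelOccupancyHead(nn.Module):
    """Decodes a 3D cost volume into per-voxel occupancy logits."""

    def __init__(self, dim):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Conv3d(dim, dim, 3, padding=1), nn.ReLU(),
            nn.Conv3d(dim, 1, 1),
        )

    def forward(self, cost_volume):           # (B, C, D, H, W)
        return self.decoder(cost_volume)      # (B, 1, D, H, W) occupancy logits


class ToyODT(nn.Module):
    """Toy stand-in for the detection-and-tracking pipeline."""

    def __init__(self, dim=64, voxels=(8, 16, 16)):
        super().__init__()
        self.voxels = voxels
        n = voxels[0] * voxels[1] * voxels[2]
        self.queries = nn.Parameter(torch.randn(n, dim))       # one query per voxel
        # Standard attention as a stand-in for deformable attention.
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.occupancy = VoxelOccupancyHead(dim)

    def encode(self, feats):                   # feats: (B, N_tokens, C) stereo features
        q = self.queries.unsqueeze(0).expand(feats.size(0), -1, -1)
        vox, _ = self.attn(q, feats, feats)    # (B, N_vox, C) voxel features
        D, H, W = self.voxels
        return vox.transpose(1, 2).reshape(feats.size(0), -1, D, H, W)

    def forward(self, feats_t, feats_t1):
        vol_t, vol_t1 = self.encode(feats_t), self.encode(feats_t1)
        occ_t = self.occupancy(vol_t)
        # Track by matching voxel features across frames (cosine similarity
        # as a stand-in for the paper's matching module).
        f_t = vol_t.flatten(2).transpose(1, 2)                 # (B, N_vox, C)
        f_t1 = vol_t1.flatten(2).transpose(1, 2)
        sim = torch.einsum("bnc,bmc->bnm",
                           nn.functional.normalize(f_t, dim=-1),
                           nn.functional.normalize(f_t1, dim=-1))
        return occ_t, sim


if __name__ == "__main__":
    model = ToyODT()
    feats_t = torch.randn(1, 256, 64)    # placeholder stereo features, frame t
    feats_t1 = torch.randn(1, 256, 64)   # frame t+1
    occ, sim = model(feats_t, feats_t1)
    print(occ.shape, sim.shape)          # (1, 1, 8, 16, 16) and (1, 2048, 2048)
```

Using learned voxel queries, as sketched here, decouples the resolution of the occupancy grid from the image resolution; the actual model's deformable attention and progressive decoding refine this idea, but are omitted from the sketch.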
Date of Conference: 14-18 October 2024
Date Added to IEEE Xplore: 25 December 2024
PubMed ID: 40143961
Conference Location: Abu Dhabi, United Arab Emirates


I. Introduction

Obstacle detection and tracking represent a safety-critical challenge across various domains, including robot autonomous navigation [1]–[5] and self-driving vehicles [6]–[9]. For instance, a service robot needs to detect the people and pillars surrounding it, track their motions (if any), or even predict their future trajectories to avoid collisions. Accurate obstacle detection and tracking are crucial components of autonomous navigation systems, particularly in state-based frameworks, to ensure collision-free navigation [10]–[13]. Recent research efforts focus on low-cost visual sensors for obstacle perception [10], [14]–[17] as a more affordable alternative to expensive sensors such as LiDAR. In this paper, we concentrate on a specific line of research employing stereo cameras, which offer higher 3D perception accuracy, an extended sensing range, and greater agility for robots compared to monocular-based systems [18], [19].
