
Event- and Frame-Based Visual-Inertial Odometry With Adaptive Filtering Based on 8-DOF Warping Uncertainty


Abstract:

In this letter, we present an event- and frame-based visual-inertial odometry (VIO) algorithm that fuses frames, events, and inertial measurements in a robust and adaptive manner. Frames from standard cameras provide rich scene context at a fixed rate. Event cameras, on the other hand, asynchronously produce events at the pixel level when intensity changes occur, and are therefore resilient to motion blur and have a high dynamic range. To harness the advantages of the two sensors, our frontend fuses their outputs by creating brightness increment patches from each output and minimizing the differences between them with an 8-DOF warping model. The warping model and the optimization process allow for robust feature tracking in the frontend of the algorithm. The minimized residual is then used in the multi-state filter-based backend, where the measurement update is adaptively performed depending on the size of the residual, reflecting the quality of the tracked features, for accurate estimation. Comparative evaluation on two publicly available datasets shows that our method outperforms state-of-the-art event-based VIO algorithms in pose estimation accuracy.
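The abstract's core idea, an 8-DOF warp aligning brightness increment patches, with the residual size then driving an adaptive filter update, can be illustrated with a minimal NumPy sketch. The paper's exact parameterization, interpolation scheme, and adaptation rule are not given here, so the following assumes a homography-style 8-DOF warp (perturbations of the identity with the bottom-right entry fixed to 1), nearest-neighbour sampling, and a simple residual-proportional noise inflation; all function names are illustrative.

```python
import numpy as np

def warp_8dof(params, pts):
    """Apply an 8-DOF (homography) warp to 2D points.
    params: (8,) increments to the identity homography, i.e.
            [h11-1, h12, h13, h21, h22-1, h23, h31, h32] with h33 = 1.
    pts: (N, 2) pixel coordinates (x, y)."""
    H = np.eye(3)
    H[0, :] += params[0:3]
    H[1, :] += params[3:6]
    H[2, :2] += params[6:8]
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    warped = homog @ H.T
    return warped[:, :2] / warped[:, 2:3]  # perspective division

def photometric_residual(params, patch_ref, patch_cur, coords):
    """Difference between a reference brightness-increment patch and the
    warped sample of the current one (nearest-neighbour for brevity)."""
    w = warp_8dof(params, coords)
    ij = np.clip(np.round(w).astype(int), 0,
                 np.array(patch_cur.shape)[::-1] - 1)  # clamp to (x, y) bounds
    sampled = patch_cur[ij[:, 1], ij[:, 0]]
    return sampled - patch_ref.ravel()

def adaptive_noise(residual, base_sigma=1.0, scale=0.5):
    """Hypothetical adaptive rule: inflate the measurement noise used in
    the filter update as the final alignment residual grows, so poorly
    tracked features are weighted down."""
    return base_sigma * (1.0 + scale * np.linalg.norm(residual))
```

In a full pipeline, `params` would be obtained by nonlinear least squares over `photometric_residual`, and the converged residual would feed the backend's measurement-noise adaptation; the sketch only shows the evaluation of both quantities.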
Published in: IEEE Robotics and Automation Letters ( Volume: 9, Issue: 2, February 2024)
Page(s): 1003 - 1010
Date of Publication: 05 December 2023


I. Introduction

Accurately estimating the ego-motion of a sensor is crucial in various applications, including autonomous robotics and augmented/virtual reality. Recently, numerous vision-based algorithms have been proposed to address this problem and have shown great progress. Yet conventional cameras possess the innate limitations of motion blur and low dynamic range. Under high-speed motion or in environments with high dynamic range, navigation algorithms that rely only on standard cameras perform poorly or even fail.
