
A modular framework for model-based visual tracking using edge, texture and depth features


Abstract:

We present in this paper a modular real-time model-based visual tracker. It fuses different types of measurements, namely edge points, textured points, and depth maps, provided by one or multiple vision sensors. A confidence index is also proposed to determine whether the tracker's output is reliable. As expected, experimental results show that the more measurement types are combined, the more accurate and robust the tracker becomes. The corresponding C++ source code is available to the community in the ViSP library.
Date of Conference: 01-05 October 2018
Date Added to IEEE Xplore: 06 January 2019
Conference Location: Madrid, Spain

I. Introduction

The ability to accurately localize a camera with respect to an object of interest is a crucial step toward dynamic manipulation in robotic vision. With a real-time process, complex tasks such as object grasping or robot positioning can then be performed in closed loop by visual servoing, accounting for perturbations and dynamic environments [1]. Augmented reality applications [2] and indoor navigation of mobile robots [3] can also be considered. The main challenges of object tracking lie not only in tracking the object accurately but also in ensuring reliability and robustness.

