Structure from stereo vision using unsynchronized cameras for simultaneous localization and mapping


Abstract:

This paper presents a system for automatic reconstruction of 3D structure using two unsynchronized cameras. Three images are acquired sequentially from the left, right, and again from the left camera. A virtual image from the left camera synchronized with the right image is created by interpolating matching points of interest (SIFT features) in the two left images. Both geometric and probabilistic criteria are used to select the correct set of matching features amongst the three views. In an indoor environment, the method typically results in 3D structure with approximately 200 feature points, with a median 3D accuracy of 1.6 cm when the average depth is 3 m and the robot has moved 1-2 cm between each image acquisition.
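The paper's abstract describes synthesizing a virtual left image, synchronized with the right image, by interpolating matched SIFT feature positions between the two left images. The exact interpolation scheme is not specified here; the following is a minimal sketch assuming simple linear interpolation of matched feature coordinates over time (function and variable names are hypothetical, not the authors' implementation):

```python
import numpy as np

def interpolate_virtual_features(pts_left1, pts_left2, t1, t2, t_right):
    """Linearly interpolate matched feature positions between two left-camera
    images (taken at times t1 and t2) to estimate where each feature would
    appear in a virtual left image captured at the right camera's time t_right.

    pts_left1, pts_left2: (N, 2) arrays of matched feature coordinates (px).
    """
    pts_left1 = np.asarray(pts_left1, dtype=float)
    pts_left2 = np.asarray(pts_left2, dtype=float)
    # Fraction of the interval [t1, t2] elapsed at the right image's capture time.
    alpha = (t_right - t1) / (t2 - t1)
    return (1.0 - alpha) * pts_left1 + alpha * pts_left2

# Example: a feature moves 4 px to the right between the two left images,
# and the right image is captured midway between them.
p1 = np.array([[100.0, 50.0]])
p2 = np.array([[104.0, 50.0]])
print(interpolate_virtual_features(p1, p2, 0.0, 1.0, 0.5))  # [[102.  50.]]
```

The interpolated feature positions in the virtual image can then be matched against the right image for stereo triangulation, as the three-view matching described in the abstract suggests.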
Date of Conference: 02-06 August 2005
Date Added to IEEE Xplore: 05 December 2005
Print ISBN:0-7803-8912-3

Conference Location: Edmonton, AB, Canada

I. Introduction

The degree of autonomy of a robot depends strongly on its ability to perform simultaneous localization and mapping (SLAM). An autonomous robot should be able to explore its environment without user intervention, build a reliable map, and localize itself within that map. Many approaches to the SLAM problem can be found in the literature, but the most promising one in terms of cost, computation, and accuracy is the so-called visual simultaneous localization and mapping (vSLAM™) [8], [9]. In vSLAM, localization is achieved by recognizing a visual landmark, i.e., a set of unique and identifiable feature points in 3D space. If a subset of these features is later observed, the vSLAM system can compute the robot's new pose [5].
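The landmark-recognition idea above can be illustrated with a toy sketch: a landmark database is queried by matching observed descriptors against each stored landmark's feature set using Lowe's ratio test. This is not the vSLAM implementation of [8], [9]; all names, sizes, and thresholds below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical landmark database: each vSLAM landmark is a set of unique
# feature descriptors (e.g. 128-D SIFT vectors) with 3D map positions.
landmarks = {
    name: {"desc": rng.random((200, 128)), "pts3d": rng.random((200, 3))}
    for name in ("lm_a", "lm_b")
}

def count_matches(query_desc, stored_desc, ratio=0.8):
    """Count query descriptors whose nearest stored descriptor is
    distinctly closer than the second nearest (Lowe's ratio test)."""
    n = 0
    for q in query_desc:
        d = np.linalg.norm(stored_desc - q, axis=1)
        i1, i2 = np.argsort(d)[:2]
        if d[i1] < ratio * d[i2]:
            n += 1
    return n

def recognize(query_desc, min_matches=20):
    """Return the name of the best-matching landmark, or None if no
    landmark reaches min_matches ratio-test matches."""
    best_name, best_n = None, 0
    for name, lm in landmarks.items():
        m = count_matches(query_desc, lm["desc"])
        if m > best_n:
            best_name, best_n = name, m
    return best_name if best_n >= min_matches else None

# A query that re-observes a noisy subset of lm_a's features:
query = landmarks["lm_a"]["desc"][:50] + rng.normal(0.0, 0.01, (50, 128))
print(recognize(query))  # lm_a
```

Once a landmark is recognized, the 2D observations paired with the stored 3D points (`pts3d`) yield 3D-2D correspondences from which the new robot pose can be solved, e.g. with a perspective-n-point (PnP) algorithm.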
