I. Introduction
Geometric reconstruction from multiple views is an important problem in many applications. Image-based rendering and 3D scanning demand accurate and visually pleasing models, whereas higher-level robotic planning requires concise geometric representations of an environment. While great progress has been made with feature-point-based structure from motion (SfM) techniques in recent years, the reconstruction of scenes containing non-textured surfaces remains a major challenge. Many man-made environments do not even provide sufficient feature points to recover the camera motion [1], let alone to reconstruct surfaces. Even a partial lack of texture quickly gives rise to holes with conventional techniques [2], [3]. For robotic planning this is usually not acceptable, which is why active vision systems (e.g., laser scanners or RGB-D sensors [4]) are mostly employed. However, despite the lack of point features, the structural information of sparsely textured or non-textured scenes is still visually deducible from edges (by which we mean intensity gradient maxima). Using piecewise straight line segments as a compact edge representation, we aim to reconstruct completely untextured scenes that are only partially visible in each frame, as is the case during indoor exploration.