I. Introduction
Today's palette of robotic applications using visual sensors for Simultaneous Localization and Mapping (SLAM) is steadily growing, for good reason. Alternatives such as ultrasonic sensors, planar laser range-finders, or time-of-flight cameras are sparse in information content, bulky, or inaccurate. The ratio between the information content provided by ordinary cameras and the corresponding sensor size and weight is unmatched by any other sensor type. Compact platforms in particular, such as small inspection robots or Micro Aerial Vehicles (MAVs), increasingly rely on vision. Recently, Blösch et al. [1] demonstrated autonomous MAV navigation in unstructured environments using monocular vision as the only exteroceptive sensor.