I. Introduction
While there has been tremendous progress in the development of state estimation and simultaneous localization and mapping (SLAM) algorithms in recent years, dynamic motion can still induce failure in even the most robust systems [1]. In particular, state estimation and SLAM methods that rely on visual information suffer a significant degradation in visual feature tracking when the viewing angle of a robot's onboard cameras changes rapidly.