1. Introduction
The use of cameras to digitize the geometry, texture, lighting, and motion of arbitrary scenes is a fundamental problem in computer vision. General monocular solutions remain elusive, but practical algorithms have been developed that leverage motion, shape, or appearance priors, and/or require instrumenting the scene with motion markers or multiple calibrated cameras.
Figure: (Top) Our TwinPod hybrid depth camera captures high-speed performance with a pair of high (resp. low) frame-rate and low (resp. high) resolution sensors. (Bottom-left) The reconstruction in the canonical frame obtained by our re-implementation of DynamicFusion on the 30 FPS stream. (Bottom-right) Our TwinFusion algorithm efficiently resolves motion from the high-frame-rate camera and exploits this information to guide the high-resolution non-rigid reconstruction.