1. Introduction
Estimating the pose (i.e., position and orientation) of 3D objects from a single RGB image is an important task in computer vision. This field is often subdivided into specific tasks, e.g., 6DoF pose estimation for robot manipulation and 3D object detection for autonomous driving. Although these tasks share the same fundamentals of pose estimation, the different nature of the data has led to divergent choices of methods. Top performers [29], [42], [44] on the 3D object detection benchmarks [6], [14] fall into the category of direct 4DoF pose prediction, leveraging advances in end-to-end deep learning. On the other hand, the 6DoF pose estimation benchmark [19] is largely dominated by geometry-based methods [20], [46], which exploit the provided 3D object models and achieve stable generalization performance. However, it is quite challenging to bring together the best of both worlds, i.e., to train a geometric model that learns the object pose in an end-to-end manner.
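Concretely, geometry-based methods build a set of weighted 2D-3D correspondences $X = \{x^{3D}_i, x^{2D}_i, w^{2D}_i\}_{i=1}^{N}$ between points on the object model and pixels in the image, and recover the pose with a Perspective-n-Points (PnP) solver. In generic notation (a sketch of the standard weighted formulation, not a quotation of any particular method), the solver returns

\[
y^{*} = \arg\min_{y} \; \frac{1}{2} \sum_{i=1}^{N} \left\| w^{2D}_i \circ \big( \pi(R\,x^{3D}_i + t) - x^{2D}_i \big) \right\|^{2},
\]

where the pose $y = (R, t)$ comprises rotation and translation, $\pi(\cdot)$ is the camera projection, and $\circ$ denotes the element-wise product. Because the pose emerges from an $\arg\min$, a loss defined on $y^{*}$ does not propagate gradients to the correspondences in any straightforward way, which is precisely what makes end-to-end training of the geometric pipeline difficult.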
EPro-PnP (end-to-end probabilistic Perspective-n-Points) is a general solution to end-to-end 2D-3D correspondence learning. In this paper, we present two distinct networks trained with EPro-PnP: (a) an off-the-shelf dense correspondence network whose potential is unleashed by end-to-end training, and (b) a novel deformable correspondence network that explores new possibilities of fully learnable 2D-3D points.
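For intuition, the following minimal PyTorch sketch (with hypothetical names and shapes, not the exact architecture of either network) shows what a dense correspondence head amounts to: a small convolutional module that predicts, for each pixel of an object crop, a 3D point in the object frame and a 2D weight.

```python
import torch
import torch.nn as nn


class DenseCorrespondenceHead(nn.Module):
    """Hypothetical sketch of a dense correspondence head (illustrative only).

    For each pixel of an object RoI feature map it predicts a 3D point in the
    object frame and a 2D weight, yielding the weighted 2D-3D correspondences
    that a PnP layer consumes downstream.
    """

    def __init__(self, in_channels=256):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(in_channels, 256, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.coord_3d = nn.Conv2d(256, 3, 1)   # per-pixel (x, y, z) in the object frame
        self.weight_2d = nn.Conv2d(256, 2, 1)  # per-pixel weight along each image axis

    def forward(self, feat):
        h = self.shared(feat)
        x3d = self.coord_3d(h)
        w2d = self.weight_2d(h).sigmoid()  # keep weights positive and bounded
        return x3d, w2d


# Toy usage: a single 32x32 RoI feature map.
feat = torch.randn(1, 256, 32, 32)
x3d, w2d = DenseCorrespondenceHead()(feat)
print(x3d.shape, w2d.shape)  # torch.Size([1, 3, 32, 32]) torch.Size([1, 2, 32, 32])
```

With a differentiable, probabilistic PnP layer on top, the gradient of a pose-level loss reaches the coordinate and weight maps directly, which is what distinguishes end-to-end correspondence learning from supervising these outputs against fixed surrogate targets.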