I. Introduction
Depth maps play a fundamental role in many computer vision and computational photography applications, such as 3D reconstruction [1], multi-view rendering [2], virtual reality [3], and robot vision [4]. With the progress of sensing technology, depth information of a scene can now be readily acquired by inexpensive sensors, such as Time-of-Flight (ToF) cameras [5] and the Microsoft Kinect [6]. Nowadays, RGB-D cameras are ubiquitous and have enabled a large suite of consumer applications. In practice, however, the captured depth maps usually have much lower resolution than the companion color images; the depth maps captured by ToF cameras, for instance, are limited to low resolutions. Many applications, such as 3D object reconstruction, robot navigation, and automotive driver assistance, require accurate depth information at every color pixel position. It is therefore essential to develop an effective depth super-resolution strategy to bridge the resolution gap between depth and color images.