
Depth Image Inpainting for RGB-D Camera Based on Light Field EPI



Abstract:

An RGB-D camera is an active depth measurement device. Depth images captured by RGB-D cameras usually contain depth errors at object edges and corners. Current research on depth image inpainting focuses on exploiting the existing depth data and the corresponding RGB image. This paper proposes a method for inpainting depth images using the light field EPI (epipolar plane image), which has a distinctive linear structure. We present a software framework that extracts object edge depth from the light field EPI and then merges this edge depth with the original depth image into a complete depth image. Experimental results demonstrate the accuracy of the proposed inpainting method, especially for thin objects and object edges.
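The linear structure mentioned above is the classical light-field property that a scene point traces a straight line in the EPI, with a slope equal to its per-view disparity, so depth can be recovered from that slope through the pinhole relation. Below is a minimal sketch of this slope-to-depth conversion; the names (depth_from_epi_slope, focal_px, baseline_m) are illustrative assumptions and this is the standard relation only, not the paper's implementation.

import numpy as np

# Sketch of the EPI slope-to-depth relation under a pinhole model.
# A scene point at depth Z shifts by d = f * b / Z pixels between adjacent
# views, so the line it traces in the EPI has slope d (pixels per view).
def depth_from_epi_slope(slope_px_per_view, focal_px, baseline_m):
    disparity = np.abs(np.asarray(slope_px_per_view, dtype=float))
    return focal_px * baseline_m / np.maximum(disparity, 1e-6)

# Example: focal length 1500 px, 1 mm spacing between views, slope of
# 3 px per view -> depth_from_epi_slope(3.0, 1500.0, 0.001) ~ 0.5 m.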
Date of Conference: 27-29 June 2018
Date Added to IEEE Xplore: 18 October 2018
Conference Location: Chongqing, China

I. Introduction

In augmented reality and 3D reconstruction, one of the key steps is capturing a depth image of the scene. Active depth measurement equipment, which projects light into the scene, can capture depth images easily. Laser range scanners usually achieve high accuracy [1], but they are time-consuming because of their slice-by-slice scanning mode. The RGB-D camera is another kind of active depth measurement equipment; it combines an RGB camera and a depth camera to capture color and depth images simultaneously. Time-of-flight (TOF) technology is an advanced active depth measurement method: it emits light into the scene and computes depth from the phase difference between the emitted and reflected light. With TOF, the measurement speed and accuracy of depth cameras have improved considerably. However, a depth camera has a valid working range; if the measured distance falls closer or farther than this range, ghost regions and holes appear in the depth image. In addition, when objects occlude each other in the scene, more holes appear at the object edges.
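As a concrete illustration of the TOF principle described above, the following sketch converts a measured phase difference into depth for a continuous-wave sensor. The function and parameter names (tof_depth, phase_shift_rad, modulation_freq_hz) are assumptions for illustration and do not reflect Kinect v2 internals.

import math

C = 299_792_458.0  # speed of light, m/s

# Depth from the phase difference between emitted and reflected light
# (continuous-wave TOF). The reflected signal is delayed by
# t = phase / (2 * pi * f_mod); the light travels the distance twice
# (out and back), so depth = c * t / 2 = c * phase / (4 * pi * f_mod).
def tof_depth(phase_shift_rad, modulation_freq_hz):
    return C * phase_shift_rad / (4.0 * math.pi * modulation_freq_hz)

# Example: a phase shift of pi/2 at an 80 MHz modulation frequency gives
# about 0.47 m; the unambiguous range at this frequency, c / (2 * f_mod),
# is roughly 1.87 m.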

Fig. 1. RGB and depth images of Kinect v2.

References
1.
P Thanusutiyabhorn, P Kanongchaiyos and W S. Mohammed, "Image-based 3D laser scanner[C]", Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON) 2011 8th International Conference on, pp. 975-978, 2011.
2.
S Izadi, D Kim, O Hilliges et al., "KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera[C]", Proceedings of the 24th annual ACM symposium on User interface software and technology, pp. 559-568, 2011.
3.
S Paris and F. Durand, "A fast approximation of the bilateral filter using a signal processing approach[J]", Computer Vision-ECCV 2006, pp. 568-580, 2006.
4.
H Xue, S Zhang and D. Cai, "Depth image inpainting: Improving low rank matrix completion with low gradient regularization [J]", IEEE Transactions on Image Processing, vol. 26, no. 9, pp. 4311-4320, 2017.
5.
P Buyssens, O Le Meur, M Daisy et al., "Depth-guided disocclusion inpainting of synthesized RGB-D images[J]", IEEE Transactions on Image Processing, vol. 26, no. 2, pp. 525-538, 2017.
6.
C Chen, J Cai, J Zheng et al., "A color-guided region-adaptive and depth-selective unified framework for Kinect depth recovery[C]", Multimedia Signal Processing (MMSP) 2013 IEEE 15th International Workshop on, pp. 007-012, 2013.
7.
S Bhattacharya, S Gupta and K S. Venkatesh, "High accuracy depth filtering for Kinect using edge guided inpainting[C]", Advances in Computing Communications and Informatics (ICACCI) 2014 International Conference on, pp. 868-874, 2014.
8.
A Maimone and H. Fuchs, "Encumbrance-free telepresence system with real-time 3D capture and display using commodity depth cameras[C]", Mixed and Augmented Reality (ISMAR) 2011 10th IEEE International Symposium on, pp. 137-146, 2011.
9.
W Song, A V Le, S Yun et al., "Depth completion for kinect v2 sensor[J]", Multimedia Tools and Applications, vol. 76, no. 3, pp. 4357-4380, 2017.
10.
J Yang, X Ye, K Li et al., "Color-guided depth recovery from RGB-D data using an adaptive autoregressive model[J]", IEEE transactions on image processing, vol. 23, no. 8, pp. 3443-3458, 2014.
11.
C Yang, X Lu, Z Lin et al., "High-Resolution Image Inpainting using Multi-Scale Neural Patch Synthesis[J]", 2016.
12.
L Chen, H Lin and S. Li, "Depth image enhancement for Kinect using region growing and bilateral filter[C]", Pattern Recognition (ICPR) 2012 21st International Conference on, pp. 3070-3073, 2012.
13.
C Kim, H Zimmer, Y Pritch et al., "Scene reconstruction from high spatio-angular resolution light fields[J]", ACM Trans. Graph., vol. 32, no. 4, pp. 73:1-73:12, 2013.
14.
S Wanner and B. Goldluecke, "Variational light field analysis for disparity estimation and super-resolution[J]", IEEE transactions on pattern analysis and machine intelligence, vol. 36, no. 3, pp. 606-619, 2014.
15.
S J Gortler, R Grzeszczuk, R Szeliski et al., "The lumigraph[C]", Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pp. 43-54, 1996.
16.
A. Telea, "An image inpainting technique based on the fast marching method[J]", Journal of graphics tools, vol. 9, no. 1, pp. 23-34, 2004.
17.
V R Duseev and A N. Malchukov, "Kinect sensor depth data filtering[C]", Mechanical Engineering Automation and Control Systems (MEACS) 2014 International Conference on, pp. 1-4, 2014.
18.
Z. Zhang, "A flexible new technique for camera calibration[J]", IEEE Transactions on pattern analysis and machine intelligence, vol. 22, no. 11, pp. 1330-1334, 2000.
19.
J-Y. Bouguet, Camera calibration toolbox[CP], [online] Available: http://www.vision.caltech.edu/bouguetj/calib_doc.
20.
NYU RGB-D datasets[DB], [online] Available: https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html.
21.
A Levin, D Lischinski and Y. Weiss, "Colorization using optimization[C]", ACM transactions on graphics (tog), vol. 23, no. 3, pp. 689-694, 2004.
