Spatial domain complexity reduction method for depth image based rendering using wavelet transform


Abstract:

Depth Image Based Rendering (DIBR) is an approach that generates a 3-D image from an original 2-D color image and its corresponding 2-D depth map. Although DIBR is a convenient technique for converting 2-D images to 3-D, a major problem of the DIBR system is that it cannot achieve real-time processing because of its computing time. Therefore, this paper proposes a method based on the discrete wavelet transform and an adaptive edge-oriented smoothing process to reduce the computing time of the system while preserving the original texture. The results indicate that the proposed method not only preserves the vertical texture but also reduces the computing time of the DIBR system by at least 60%.
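As a rough illustration of the complexity-reduction idea (not the authors' exact algorithm), the Python sketch below takes the one-level 2-D Haar approximation (LL) subband of a depth map, so that any subsequent per-pixel smoothing operates on roughly a quarter of the pixels; the function name haar_ll and the use of plain 2x2 block averaging are assumptions made for illustration only.

import numpy as np

def haar_ll(depth):
    # One-level 2-D Haar approximation (LL) subband, computed here as the
    # 2x2 block average of the depth map (equal to the LL band up to scaling).
    # The quarter-size result means later smoothing touches ~25% of the pixels.
    h, w = depth.shape
    d = depth[:h - h % 2, :w - w % 2].astype(np.float32)
    return 0.25 * (d[0::2, 0::2] + d[0::2, 1::2] +
                   d[1::2, 0::2] + d[1::2, 1::2])

# Example: smooth the small LL band instead of the full-resolution depth map.
depth = np.random.randint(0, 256, (480, 640)).astype(np.float32)
ll = haar_ll(depth)   # 240 x 320 approximation band
# ... apply the (adaptive edge-oriented) smoothing filter to ll here ...

A full-resolution depth map would then be obtained by upsampling or an inverse transform; the adaptive edge-oriented smoothing itself is described in the body of the paper.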
Date of Conference: 12-15 November 2013
Date Added to IEEE Xplore: 09 January 2014
Conference Location: Naha, Japan

I. INTRODUCTION

Three-dimensional (3-D) images have become increasingly popular in daily life, as they give users a sense of presence by simulating a real 3-D view. In general, a stereoscopic image must be synthesized from at least two 2-D images, so the coding and transmission stage requires more than twice the transmission time and storage space. The European IST ATTEST project [1] proposed the Depth Image Based Rendering (DIBR) technology, in which the image for 3-D synthesis is generated from a single 2-D image and its corresponding depth map, to overcome this deficiency of traditional 3-D TV. DIBR reduces both the transmission time and the storage space of the images. Several factors affect the quality of DIBR: 1) occlusion and disocclusion generated by image warping; 2) an imperfect depth map; and 3) incomplete preservation of the horizontal and vertical texture. Occlusion occurs after image warping because background and foreground pixels may overlap; it can be resolved by letting the foreground pixels replace the background pixels [2]. Disocclusion usually occurs along the edges between the foreground and background of the depth map and appears as newly exposed areas (so-called "holes") in the virtual image after warping. Several approaches have been proposed to fill the holes, such as interpolation [3], extrapolation [3], mirroring [4], and image inpainting [4]. Chen et al. [5] proposed applying a smoothing filter to the depth map to reduce the holes; however, this approach does not preserve the vertical texture well. To solve this problem, Zhang et al. [7] proposed asymmetric smoothing filters to preserve the vertical texture, but applying asymmetric smoothing filters to the whole depth map may destroy the horizontal texture. Therefore, Lee et al. [8] proposed a smoothing filter based on an analysis of the vertical texture to overcome these difficulties. Tam et al. [6] used a symmetric smoothing filter where the vertical texture is weak and the asymmetric smoothing filter [7] where it is strong; with this arrangement the disocclusion artifacts were progressively removed as the smoothing of the depth map became stronger, and geometric distortions were reduced as well.
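For context, the following Python sketch shows a minimal depth-image-based warping step: each pixel of the reference view is shifted horizontally by a disparity derived from its depth value, overlaps are resolved in favour of the foreground (larger disparity), and positions that receive no pixel are marked as holes (disocclusions). The linear depth-to-disparity mapping and the parameter max_disp are simplifying assumptions, not the formulation used in the paper.

import numpy as np

def warp_view(color, depth, max_disp=16):
    # color: (H, W, 3) reference image; depth: (H, W), larger value = closer.
    # Returns the warped virtual view and a boolean mask of holes.
    h, w = depth.shape
    disp = depth.astype(np.float32) / max(depth.max(), 1) * max_disp  # assumed linear mapping
    out = np.zeros_like(color)
    zbuf = np.full((h, w), -1.0)  # keeps the nearest (foreground) contribution
    for y in range(h):
        for x in range(w):
            xt = x + int(round(float(disp[y, x])))
            if 0 <= xt < w and disp[y, x] > zbuf[y, xt]:
                out[y, xt] = color[y, x]   # foreground replaces background (occlusion)
                zbuf[y, xt] = disp[y, x]
    holes = zbuf < 0                       # newly exposed areas still to be filled
    return out, holes

Holes returned by such a warping step are what the interpolation, extrapolation, mirroring, and inpainting methods cited above attempt to fill, and what smoothing of the depth map tries to reduce beforehand.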

REFERENCES

[1] A. Redert, M. O. de Beeck, C. Fehn, W. Ijsselsteijn, M. Pollefeys, L. V. Gool, E. Ofek, I. Sexton, and P. Surman, "ATTEST: advanced three-dimensional television system technologies," International Symposium on 3D Data Processing Visualization and Transmission, pp. 313-319, June 2002.
[2] Q. H. Nguyen, M. N. Do, and S. J. Patel, "Depth image-based rendering from multiple cameras with 3D propagation algorithm," International Conference on Immersive Telecommunications, pp. 1-6, May 2009.
[3] C. Vázquez, W. J. Tam, and F. Speranza, "Stereoscopic imaging: filling disoccluded areas in depth image-based rendering," Proceedings of the SPIE, vol. 6392, October 2006.
[4] C.-M. Cheng, S.-J. Lin, S.-H. Lai, and J.-C. Yang, "Improved novel view synthesis from depth image with large baseline," International Conference on Pattern Recognition, pp. 1-4, December 2008.
[5] W. Y. Chen, Y. L. Chang, S. F. Lin, L. F. Ding, and L. G. Chen, "Efficient depth image based rendering with edge dependent depth filter and interpolation," IEEE International Conference on Multimedia and Expo, pp. 1314-1317, July 2005.
[6] W. J. Tam, G. Alain, L. Zhang, T. Martin, and R. Renaud, "Smoothing depth maps for improved stereoscopic image quality," Proceedings of SPIE, vol. 5599, pp. 162-172, October 2004.
[7] L. Zhang and W. J. Tam, "Stereoscopic image generation based on depth images for 3DTV," IEEE Transactions on Broadcasting, vol. 51, no. 2, pp. 191-199, June 2005.
[8] P.-J. Lee and Effendi, "Nongeometric distortion smoothing approach for depth map preprocessing," IEEE Transactions on Multimedia, vol. 13, no. 2, pp. 246-254, April 2011.
[9] A. Woods, T. Docherty, and R. Koch, "Image distortions in stereoscopic video systems," Proceedings of the SPIE, vol. 1915, pp. 36-48, February 1993.
[10] T. Vijayaraghavan and K. Rajan, "Image coding of 3D volume using wavelet transform for fast retrieval of 2D images," IEE Proceedings - Vision, Image and Signal Processing, vol. 153, no. 4, pp. 507-511, August 2006.
[11] M. Ravasi, L. Tenze, and M. Mattavelli, "A scalable and programmable architecture for 2-D DWT decoding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 12, no. 8, pp. 671-677, August 2002.
[12] C.-H. Hsia, J.-M. Guo, and J.-S. Chiang, "Improved low-complexity algorithm for 2-D integer lifting-based discrete wavelet transform using symmetric mask-based scheme," IEEE Transactions on Circuits and Systems for Video Technology, vol. 19, no. 8, pp. 1202-1208, August 2009.
[13] B. Sugandi, H. Kim, J. K. Tan, and S. Ishikawa, "Real time tracking and identification of moving persons by using a camera in outdoor environment," International Journal of Innovative Computing, Information and Control, vol. 5, no. 5, pp. 1179-1188, May 2009.
[14] J.-C. Huang and W.-S. Hsieh, "Wavelet-based moving object segmentation," Electronics Letters, vol. 39, pp. 1380-1382, September 2003.
[15] S. Cvetkovic, P. Bakker, J. Schirris, and P. H. N. de With, "Background estimation and adaptation model with light-change removal for heavily down-sampled video surveillance signals," IEEE International Conference on Image Processing, pp. 1829-1832, October 2006.
[16] Y.-L. Tian and A. Hampapur, "Robust salient motion detection with complex background for real-time video surveillance," IEEE Workshop on Motion and Video Computing, vol. 2, pp. 30-35, January 2005.
[17] F.-H. Cheng and Y.-L. Chen, "Real time multiple objects tracking and identification based on discrete wavelet transform," Pattern Recognition, vol. 39, no. 3, pp. 1126-1139, June 2006.
[18] C.-H. Hsia and J.-M. Guo, "Improved directional lifting-based discrete wavelet transform for low resolution moving object detection," IEEE International Conference on Image Processing, pp. 2457-2460, September 2012.
[19] Middlebury stereo datasets: http://vision.middlebury.edu/stereo/data/
