
A Novel Method of Minimizing View Synthesis Distortion Based on Its Non-Monotonicity in 3D Video



Abstract:

In depth-based 3D video, the view synthesis distortion (VSD) is generally measured by modeling the effects of texture and depth errors separately. Under such models, the VSD has been assumed to change monotonically with respect to both the texture and depth distortions. In this paper, we find through both theoretical analysis and experiments that, when the effects of texture and depth errors are considered jointly, the VSD does not always change monotonically with them. Specifically, we first prove that the VSD is non-monotonic with respect to the texture distortion: the VSD increases with the texture distortion in the higher distortion range but decreases with it in the lower range. This differs from the general scenario in which only the effect of texture errors is considered. We also characterize their relationship analytically at low computational cost and identify the turning point at which the trend of the VSD reverses. Second, we confirm that the VSD is always monotonic with respect to the depth distortion, consistent with the general scenario in which only the effect of depth errors is considered. Since the VSD attains a minimum at the turning point, its non-monotonicity can be exploited to improve the viewing performance of 3D video in relevant applications. We demonstrate two such applications. First, it is used to generate the synthesized view with minimal distortion, achieving an average PSNR gain of 0.51 dB for the tested scenarios. Second, it is used for lossy compression of texture videos in 3D video, reducing the coding rate by 24% on average for the tested scenarios without increasing the VSD.
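One way to see how a non-monotonic VSD can arise is a toy second-order sketch (an illustration under assumed numbers, not the paper's actual derivation): if the synthesized pixel error is the sum of a zero-mean texture error and a depth-induced warping error that are negatively correlated, the expected squared error E[(Δt + Δw)²] = σt² + 2ρσtσw + σw² is minimized at a nonzero texture-error level σt = −ρσw, i.e., a turning point below which adding texture distortion actually reduces the VSD.

```python
import numpy as np

def toy_vsd(sigma_t, sigma_w=1.0, rho=-0.8):
    """E[(dt + dw)^2] for zero-mean errors dt, dw with standard deviations
    sigma_t, sigma_w and correlation rho (hypothetical values, illustration only)."""
    return sigma_t**2 + 2.0 * rho * sigma_t * sigma_w + sigma_w**2

# Scan texture-error levels: the curve first decreases, then increases,
# so the minimum sits at a nonzero texture distortion (the turning point).
sigmas = np.linspace(0.0, 2.0, 201)
curve = toy_vsd(sigmas)
turning = sigmas[np.argmin(curve)]  # analytic minimum at -rho * sigma_w = 0.8
```

With these assumed parameters, the VSD at the turning point is strictly below the VSD at zero texture error, which is the effect the paper's two applications exploit.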
Published in: IEEE Transactions on Image Processing ( Volume: 26, Issue: 11, November 2017)
Page(s): 5122 - 5137
Date of Publication: 05 July 2017


PubMed ID: 28692975


I. Introduction

Over the last decade, 3D video technology has attracted increasing attention due to its capability to provide a realistic 3D viewing experience. Depth-based 3D video [1], [24], which augments conventional flat texture video with depth maps (as in multi-view-plus-depth video and free-viewpoint video), is considered the most promising solution because of its efficient representation and its flexibility in view generation compared with conventional stereo video [47] and multi-view video [46]. In this solution, texture videos of different views are captured together with the associated depth videos. At the terminal, desired views are generated from neighboring texture and depth videos by view synthesis techniques such as depth-image-based rendering (DIBR) [2], [3].
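As a rough illustration of the DIBR idea (a minimal sketch, not the rendering pipeline evaluated in the paper), the following assumes rectified cameras so that warping reduces to a per-pixel horizontal shift with disparity f·baseline/depth; the function name, focal length, and baseline are hypothetical.

```python
import numpy as np

def dibr_warp(texture, depth, f=100.0, baseline=0.5):
    """Warp a reference view to a virtual view by per-pixel horizontal shift.

    Assumes rectified cameras, so disparity = f * baseline / depth (pixels).
    texture: (H, W) reference view; depth: (H, W) per-pixel depth (metres).
    Returns the synthesized view and a boolean mask of disoccluded holes.
    """
    h, w = texture.shape
    synth = np.zeros_like(texture)
    zbuf = np.full((h, w), -1.0)      # winning disparity per target pixel
    disparity = f * baseline / depth  # closer surfaces shift more
    for y in range(h):
        for x in range(w):
            xv = int(round(x - disparity[y, x]))
            # Z-buffering: on collisions keep the nearer (larger-disparity) pixel.
            if 0 <= xv < w and disparity[y, x] > zbuf[y, xv]:
                zbuf[y, xv] = disparity[y, x]
                synth[y, xv] = texture[y, x]
    holes = zbuf < 0  # disocclusions, inpainted by hole-filling in full DIBR
    return synth, holes
```

For a fronto-parallel plane the warp is a uniform shift; depth discontinuities produce the holes that the hole-filling stage of a full DIBR pipeline must inpaint.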


Cites in Papers - IEEE (5)

1.
Ming Cheng, Yiling Xu, Wang Shen, M. Salman Asif, Chao Ma, Jun Sun, Zhan Ma, "H2-Stereo: High-Speed, High-Resolution Stereoscopic Video System", IEEE Transactions on Broadcasting, vol.68, no.4, pp.886-903, 2022.
2.
Pan Gao, Aljosa Smolic, "Occlusion-Aware Depth Map Coding Optimization Using Allowable Depth Map Distortions", IEEE Transactions on Image Processing, vol.28, no.11, pp.5266-5280, 2019.
3.
Meng Yang, Ce Zhu, Xuguang Lan, Nanning Zheng, "Efficient Estimation of View Synthesis Distortion for Depth Coding Optimization", IEEE Transactions on Multimedia, vol.21, no.4, pp.863-874, 2019.
4.
Meng Yang, Nanning Zheng, "SynBF: A New Bilateral Filter for Postremoval of Noise From Synthesis Views in 3-D Video", IEEE Transactions on Multimedia, vol.21, no.1, pp.15-28, 2019.
5.
Linwei Zhu, Yun Zhang, Shiqi Wang, Hui Yuan, Sam Kwong, Horace H.-S. Ip, "Convolutional Neural Network-Based Synthesized View Quality Enhancement for 3D Video Coding", IEEE Transactions on Image Processing, vol.27, no.11, pp.5365-5377, 2018.

Cites in Papers - Other Publishers (2)

1.
Chang Liu, Ke-bin Jia, Peng-yu Liu, "A Convolutional Neural Network-Based Complexity Reduction Scheme in 3D-HEVC", Artificial Intelligence and Security, vol.12239, pp.279, 2020.
2.
Chang Liu, Kebin Jia, Pengyu Liu, "An Improvement for View Synthesis Optimization Algorithm", Genetic and Evolutionary Computing, vol.834, pp.65, 2019.
