
Efficient plenoptic imaging representation: Why do we need it?


Abstract:

The 3D representation of the world's visual information has been a challenge for a long time, both in the analogue and digital domains. Over at least the past decade, 3D stereo-based solutions have become very common. However, several constraints and limitations ended up having a negative impact on their user popularity and market deployment. Recent developments in acquisition and display devices have shown that it is possible to offer more immersive and powerful 3D experiences by adopting higher dimensional representations. In this context, the so-called plenoptic function offers an excellent framework to analyze and discuss the recent and future developments towards improved 3D imaging representations, functionalities and applications. Since they are associated with huge amounts of data, new imaging modalities such as light fields and point clouds critically call for appropriately efficient coding solutions. Hence, the main objective of this paper is to present, organize and discuss the recent trends and future developments in 3D visual data representation within a plenoptic function framework. This is critical to effectively plan the next research and standardization steps on 3D imaging representation and coding.
Date of Conference: 11-15 July 2016
Electronic ISSN: 1945-788X
Conference Location: Seattle, WA, USA

1. Introduction

Light plays a vital role in our daily lives, as it mediates our communication with the world around us. The world is made of objects, but these objects do not communicate their properties directly to an observer; rather, they fill the space around them with a pattern of light rays that is perceived and interpreted by the human visual system. Such a pattern of light rays can be measured, yielding the now ubiquitous images and videos. Visual information plays an increasing role in our lives and evolution, and it is believed that up to 50% of the human brain is involved, in some way, in processing visual information.
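
This pattern of light rays can be formalized through the plenoptic function of Adelson and Bergen. As a brief reminder (this is the standard 7D definition, with notation chosen here for illustration rather than taken from this paper), the plenoptic function assigns a radiance value to every light ray in a scene:

P(x, y, z, \theta, \phi, \lambda, t)

where (x, y, z) is the position from which the ray is observed, (\theta, \phi) its direction of arrival, \lambda its wavelength and t the time instant. The imaging modalities discussed in this paper can be read as samplings or restrictions of this 7D function; for instance, conventional 2D video fixes the viewpoint and integrates over wavelength bands, while a 4D light field additionally samples the direction of the rays reaching each point of a capture plane.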

