
ScanNeRF: a Scalable Benchmark for Neural Radiance Fields



Abstract:

In this paper, we propose the first real-world benchmark designed for evaluating Neural Radiance Fields (NeRFs) and, more generally, Neural Rendering (NR) frameworks. We design and implement an effective pipeline for scanning real objects in quantity and with little effort. Our scan station is built on a hardware budget of less than $500 and can collect roughly 4000 images of a scanned object in just 5 minutes. This platform is used to build ScanNeRF, a dataset featuring several train/val/test splits aimed at benchmarking the performance of modern NeRF methods under different conditions. Accordingly, we evaluate three cutting-edge NeRF variants on it to highlight their strengths and weaknesses. The dataset is available on our project page, together with an online benchmark, to foster the development of better and better NeRFs.
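As a concrete illustration of the kind of evaluation such a benchmark runs, the sketch below averages PSNR between views rendered by a NeRF under test and ground-truth images from a test split. This is a minimal, hypothetical example: the psnr helper, the render_fn callback, and the (pose, image) split structure are assumptions made for illustration, not ScanNeRF's actual API.

import numpy as np

def psnr(rendered: np.ndarray, reference: np.ndarray) -> float:
    # Peak signal-to-noise ratio between two images with values in [0, 1].
    mse = np.mean((rendered.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(1.0 / mse))

def evaluate_split(render_fn, split) -> float:
    # `split` is assumed to be an iterable of (camera_pose, ground_truth_image)
    # pairs; `render_fn` renders an image for a given pose.
    scores = [psnr(render_fn(pose), gt) for pose, gt in split]
    return sum(scores) / len(scores)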
Date of Conference: 02-07 January 2023

Conference Location: Waikoloa, HI, USA

1. Introduction

What is the Metaverse? Stephenson coined this portmanteau in his novel Snow Crash, hypothesizing that in the 21st century humans, thanks to goggles, would be immersed in virtual worlds mixed with real ones. And here we are! At the time, the technology to realize the Metaverse was still hypothetical, but today Cross Reality (XR, or Extended Reality) is a fact. XR comprises a multitude of technologies and variants, such as Virtual Reality and Augmented Reality, yet they all share a single paradigm: seamless interaction between virtual environments, digital objects and people. That is the Metaverse! But it does not exist yet, and everything digital is often only a virtual representation of the real world. How much will it cost us, then, to transport our entire real world into the virtual one?

