
ContraNeRF: Generalizable Neural Radiance Fields for Synthetic-to-real Novel View Synthesis via Contrastive Learning



Abstract:

Although many recent works have investigated generalizable NeRF-based novel view synthesis for unseen scenes, they seldom consider the synthetic-to-real generalization, which is desired in many practical applications. In this work, we first investigate the effects of synthetic data in synthetic-to-real novel view synthesis and surprisingly observe that models trained with synthetic data tend to produce sharper but less accurate volume densities. For pixels where the volume densities are correct, fine-grained details will be obtained. Otherwise, severe artifacts will be produced. To maintain the advantages of using synthetic data while avoiding its negative effects, we propose to introduce geometry-aware contrastive learning to learn multi-view consistent features with geometric constraints. Meanwhile, we adopt cross-view attention to further enhance the geometry perception of features by querying features across input views. Experiments demonstrate that under the synthetic-to-real setting, our method can render images with higher quality and better fine-grained details, outperforming existing generalizable novel view synthesis methods in terms of PSNR, SSIM, and LPIPS. When trained on real data, our method also achieves state-of-the-art results. https://haoy945.github.io/contranerf/
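The geometry-aware contrastive learning mentioned in the abstract builds on standard contrastive objectives, which pull features of corresponding points across views together and push non-corresponding features apart. As a minimal sketch (an InfoNCE-style loss in general form, not the paper's exact objective; the function name and toy vectors are illustrative assumptions), the loss for one anchor feature, one positive, and a set of negatives can be written as:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.07):
    """InfoNCE-style contrastive loss for a single anchor.

    anchor, positive: (d,) feature vectors; negatives: (n, d).
    Features are L2-normalized, so similarity is cosine similarity.
    """
    a = anchor / np.linalg.norm(anchor)
    p = positive / np.linalg.norm(positive)
    n = negatives / np.linalg.norm(negatives, axis=1, keepdims=True)
    # Logits: similarity to the positive first, then to each negative.
    logits = np.concatenate(([a @ p], n @ a)) / tau
    logits -= logits.max()  # numerical stability before exponentiation
    # Cross-entropy with the positive treated as the correct "class".
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

In a geometry-aware variant, the positive would be a feature from a pixel that reprojects to the same 3D point in another input view, and the negatives would come from other locations, which encourages multi-view consistent features.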
Date of Conference: 17-24 June 2023
Date Added to IEEE Xplore: 22 August 2023

Conference Location: Vancouver, BC, Canada



1. Introduction

Novel view synthesis is a classical problem in computer vision that aims to produce photorealistic images from unseen viewpoints [2], [5], [10], [36], [40]. Recently, Neural Radiance Fields (NeRF) [25] achieved novel view synthesis by modeling a scene as a continuous function with a neural network, and it quickly attracted widespread attention due to its surprising results. However, the vanilla NeRF is designed to fit the continuous 5D radiance field of a single given scene and often fails to generalize to new scenes and datasets. Improving the generalization ability of neural scene representations remains a challenging problem.
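For context on the 5D radiance field: NeRF maps a 3D position and 2D viewing direction to a color and volume density, then renders a pixel by alpha-compositing samples along the camera ray. The compositing step can be sketched as follows (standard NeRF volume rendering, not ContraNeRF-specific; the array shapes are assumptions for illustration):

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Composite samples along one ray into a pixel color.

    sigmas: (n,) volume densities at the n ray samples.
    colors: (n, 3) RGB values at the samples.
    deltas: (n,) distances between adjacent samples.
    """
    # Opacity contributed by each sample.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights
```

This also makes the abstract's observation concrete: the composited pixel color depends directly on the predicted densities, so if the density places weight at the wrong depth, severe artifacts appear no matter how sharp the predictions are.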

