
UW-SDF: Exploiting Hybrid Geometric Priors for Neural SDF Reconstruction from Underwater Multi-view Monocular Images


Abstract:

Due to the unique characteristics of underwater environments, accurate 3D reconstruction of underwater objects poses a challenging problem in tasks such as underwater exploration and mapping. Traditional methods that rely on multiple sensor data for 3D reconstruction are time-consuming and face challenges in data acquisition in underwater scenarios. We propose UW-SDF, a framework for reconstructing target objects from multi-view underwater images based on neural SDF. We introduce hybrid geometric priors to optimize the reconstruction process, markedly enhancing the quality and efficiency of neural SDF reconstruction. Additionally, to address the challenge of segmentation consistency in multi-view images, we propose a novel few-shot multi-view target segmentation strategy using the general-purpose Segment Anything Model (SAM), enabling rapid automatic segmentation of unseen objects. Through extensive qualitative and quantitative experiments on diverse datasets, we demonstrate that our proposed method outperforms traditional underwater 3D reconstruction methods and other neural rendering approaches in the field of underwater 3D reconstruction.
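The abstract represents object geometry as the zero level set of a signed distance function (SDF). As a minimal illustration of how an SDF yields surface geometry, the sketch below sphere-traces a camera ray against an SDF, advancing by the signed distance at each step until it reaches the surface. An analytic unit-sphere SDF stands in for the learned neural network here; that substitution, and all names in the snippet, are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def sdf_sphere(p, center=np.zeros(3), radius=1.0):
    """Analytic SDF of a sphere; stands in for the learned neural SDF."""
    return np.linalg.norm(p - center) - radius

def sphere_trace(origin, direction, sdf, max_steps=100, eps=1e-5):
    """March along the ray by the signed distance until |sdf| < eps."""
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if abs(d) < eps:
            return t  # ray-surface intersection distance
        t += d  # the SDF value is a safe step size toward the surface
    return None  # ray missed the surface

# A camera at z = -3 looking along +z hits the unit sphere at z = -1,
# i.e. at distance 2 along the ray.
t_hit = sphere_trace(np.array([0.0, 0.0, -3.0]),
                     np.array([0.0, 0.0, 1.0]),
                     sdf_sphere)
```

Neural SDF methods in the NeuS family replace the hard intersection above with a differentiable volume-rendering weight along each ray, which is what makes the geometry trainable from image supervision.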
Date of Conference: 14-18 October 2024
Date Added to IEEE Xplore: 25 December 2024
Conference Location: Abu Dhabi, United Arab Emirates

