
Dr.Bokeh: DiffeRentiable Occlusion-Aware Bokeh Rendering



Abstract:

Bokeh is widely used in photography to draw attention to the subject while effectively isolating distractions in the background. Computational methods can simulate bokeh effects without relying on a physical camera lens, but the inaccurate lens modeling in existing filtering-based methods leads to artifacts that need post-processing or learning-based methods to fix. We propose Dr.Bokeh, a novel rendering method that addresses the issue by directly correcting the defect that violates physics in the current filtering-based bokeh rendering equation. Dr.Bokeh first preprocesses the input RGBD to obtain a layered scene representation. Dr.Bokeh then takes the layered representation and user-defined lens parameters to render photo-realistic lens blur based on the novel occlusion-aware bokeh rendering method. Experiments show that the non-learning-based renderer Dr.Bokeh outperforms state-of-the-art bokeh rendering algorithms in terms of photo-realism. In addition, extensive quantitative and qualitative evaluations show that the more accurate lens model pushes the limit of depth-from-defocus.
Date of Conference: 16-22 June 2024
Date Added to IEEE Xplore: 16 September 2024
Conference Location: Seattle, WA, USA

1. Introduction

Bokeh is a physical effect produced by a camera lens system. It refers to the shape and quality of out-of-focus areas in an image. Bokeh draws attention to the in-focus subject and enhances the overall aesthetic quality of the image.
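The size of this out-of-focus blur follows from the thin-lens model. As an illustrative sketch (not the paper's renderer; the function name and parameter choices are our own), the circle-of-confusion diameter for an object at a given distance can be computed from the focus distance, focal length, and aperture diameter:

```python
def coc_diameter(obj_dist, focus_dist, focal_len, aperture):
    """Circle-of-confusion diameter on the sensor (thin-lens model).

    obj_dist:   distance from lens to the object (m)
    focus_dist: distance the lens is focused at (m)
    focal_len:  lens focal length (m)
    aperture:   aperture diameter (m)
    """
    return (aperture
            * abs(obj_dist - focus_dist) / obj_dist
            * focal_len / (focus_dist - focal_len))
```

An object on the focal plane maps to a point (zero diameter); the further it sits from the focal plane, the larger the blur circle — the out-of-focus softness photographers call bokeh.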

Being occlusion-aware, Dr.Bokeh renders realistic bokeh effects directly within the rendering process, without post-processing. Compared with the scattering/gathering-based method SteReFo and the learning-based method BokehMe, Dr.Bokeh renders natural partial occlusion (red parts). MPIB learns to render a partial-occlusion effect but breaks on unseen data (blue parts). Given the same inputs, Dr.Bokeh is more robust than learning-based methods because its rendering process is physically grounded. Best viewed by zooming in.

Artifacts caused by an inaccurate lens model: color bleeding and partial occlusion are the two main artifacts introduced by the current inaccurate lens model. Color bleeding means pixels in out-of-focus regions scatter into in-focus regions. Partial occlusion is a semi-transparent effect at out-of-focus boundary regions, where part of the background is visible when the background is in focus. Best viewed by zooming in.
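The partial-occlusion effect can be reproduced by blurring each depth layer separately and compositing the layers front to back, so a softened foreground edge lets the background show through. Below is a minimal 1-D sketch of this idea (illustrative only; the box kernel, the two-layer scene, and all names are our assumptions, not the paper's actual renderer):

```python
import numpy as np

def box_blur(x, radius):
    """Crude stand-in for a lens kernel: 1-D box blur."""
    if radius == 0:
        return x.copy()
    k = 2 * radius + 1
    return np.convolve(x, np.ones(k) / k, mode="same")

def composite(layers):
    """Front-to-back 'over' compositing of (color, alpha) layers."""
    out_c = np.zeros_like(layers[0][0])
    out_a = np.zeros_like(layers[0][1])
    for color, alpha in layers:  # ordered front to back
        out_c += (1.0 - out_a) * color * alpha
        out_a += (1.0 - out_a) * alpha
    return out_c

# Out-of-focus foreground (color 1.0) covering the left half;
# in-focus background (color 0.5) everywhere behind it.
n = 11
fg_color = np.ones(n)
fg_alpha = box_blur((np.arange(n) < 5).astype(float), radius=1)
bg_color = np.full(n, 0.5)
bg_alpha = np.ones(n)

out = composite([(fg_color, fg_alpha), (bg_color, bg_alpha)])
# At the softened foreground edge the background is partially
# visible through the foreground: the partial-occlusion effect.
```

Gathering-based filters that blur the flattened image instead of per-layer colors cannot produce this mix, which is why they show hard silhouettes or color bleeding at depth boundaries.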

References
1. Hadi Alzayer, Abdullah Abuolaim, Leung Chun Chan, Yang Yang, Ying Chen Lou, Jia-Bin Huang, et al., "DC2: Dual-camera defocus control by learning to refocus", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 21488-21497, 2023.
2. Benjamin Busam, Matthieu Hog, Steven McDonagh and Gregory Slabaugh, "SteReFo: Efficient Image Refocusing with Stereo Vision", 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp. 3295-3304, 2019.
3. Antonio Criminisi, Patrick Perez and Kentaro Toyama, "Object removal by exemplar-based inpainting", 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Proceedings, pp. II-II, 2003.
4. SM Ali Eslami, Danilo Jimenez Rezende, Frederic Besse, Fabio Viola, Ari S Morcos, Marta Garnelo, Avraham Ruderman, Andrei A Rusu, Ivo Danihelka, Karol Gregor et al., "Neural scene representation and rendering", Science, vol. 360, no. 6394, pp. 1204-1210, 2018.
5. Linus Franke, Nikolai Hofmann, Marc Stamminger and Kai Selgrad, "Multi-layer depth of field rendering with tiled splatting", Proceedings of the ACM on Computer Graphics and Interactive Techniques, vol. 1, no. 1, pp. 1-17, 2018.
6. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, et al., "Generative adversarial networks", Communications of the ACM, vol. 63, no. 11, pp. 139-144, 2020.
7. Jhonny Goransson and Andreas Karlsson, "Practical post-process depth of field", GPU Gems, pp. 583-606, 2007.
8. Shir Gur and Lior Wolf, "Single Image Depth Estimation Trained via Depth From Defocus Cues", 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7675-7684, 2019.
9. James Hays and Alexei A Efros, "Scene completion using millions of photographs", ACM Transactions on Graphics (ToG), vol. 26, no. 3, 2007.
10. Liu He and Daniel Aliaga, "GlobalMapper: Arbitrary-shaped urban layout generation", Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 454-464, 2023.
11. Liu He, Yijuan Lu, John Corring, Dinei Florencio and Cha Zhang, "Diffusion-based document layout generation", Document Analysis and Recognition - ICDAR 2023: 17th International Conference, San Jose, CA, USA, August 21-26, 2023, Proceedings, Part I, pp. 361-378, 2023.
12. Liu He, Jie Shan and Daniel Aliaga, "Generative building feature estimation from satellite images", IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1-13, 2023.
13. Andrey Ignatov, Jagruti Patel and Radu Timofte, "Rendering natural camera bokeh effect with deep learning", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 418-419, 2020.
14. Andrey Ignatov, Radu Timofte, Ming Qian, Congyu Qiao, Jiamin Lin, Zhenyu Guo, Chenghua Li, Cong Leng, Jian Cheng, Juewen Peng et al., "AIM 2020 challenge on rendering realistic bokeh", European Conference on Computer Vision, pp. 213-228, 2020.
15. Satoshi Iizuka, Edgar Simo-Serra and Hiroshi Ishikawa, "Globally and locally consistent image completion", ACM Transactions on Graphics (ToG), vol. 36, no. 4, pp. 1-14, 2017.
16. Wenzel Jakob, Sebastien Speierer, Nicolas Roussel and Delio Vicini, "Dr.Jit: a just-in-time compiler for differentiable rendering", ACM Transactions on Graphics (TOG), vol. 41, no. 4, pp. 1-19, 2022.
17. Yuna Jeong, Seung Youp Baek, Yechan Seok, Gi Beom Lee and Sungkil Lee, "Real-time dynamic bokeh rendering with efficient look-up table sampling", IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 2, pp. 1373-1384, 2020.
18. Nima Khademi Kalantari, Ting-Chun Wang and Ravi Ramamoorthi, "Learning-based view synthesis for light field cameras", ACM Trans. Graph., vol. 35, no. 6, pp. 1-10, 2016.
19. Takuhiro Kaneko, "Unsupervised Learning of Depth and Depth-of-Field Effect from Natural Images with Aperture Rendering Generative Adversarial Networks", 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 15674-15683, 2021.
20. Michael Kass, Aaron Lefohn and John D Owens, Interactive depth of field using simulated diffusion on a GPU, 2006.
21. Hiroharu Kato, Deniz Beker, Mihai Morariu, Takahiro Ando, Toru Matsuoka, Wadim Kehl, et al., Differentiable rendering: A survey, 2020.
22. M. Kraus and M. Strengert, "Depth-of-Field Rendering by Pyramidal Image Processing", Computer Graphics Forum, vol. 26, no. 3, pp. 645-654, 2007.
23. Sungkil Lee, Gerard Jounghyun Kim and Seungmoon Choi, "Real-time depth-of-field rendering using point splatting on per-pixel layers" in Computer Graphics Forum, Wiley Online Library, pp. 1955-1962, 2008.
24. Sungkil Lee, Elmar Eisemann and Hans-Peter Seidel, "Depth-of-field rendering with multiview synthesis", ACM Transactions on Graphics (TOG), vol. 28, no. 5, pp. 1-6, 2009.
25. Sungkil Lee, Elmar Eisemann and Hans-Peter Seidel, "Real-time lens blur effects and focus control", ACM Trans. Graph., vol. 29, no. 4, pp. 1-7, 2010.
26. Kefei Lei and John F Hughes, "Approximate depth of field effects using few samples per pixel", Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, pp. 119-128, 2013.
27. Jizhizi Li, Jing Zhang and Dacheng Tao, Deep automatic natural image matting, 2021.
28. Tzu-Mao Li, Miika Aittala, Fredo Durand and Jaakko Lehtinen, "Differentiable monte carlo ray tracing through edge sampling", ACM Transactions on Graphics (TOG), vol. 37, no. 6, pp. 1-11, 2018.
29. Lu Ling, Yichen Sheng, Zhi Tu, Wentian Zhao, Cheng Xin, Kun Wan, Lantao Yu, Qianyu Guo, Zixun Yu, Yawen Lu et al., DL3DV-10K: A large-scale scene dataset for deep learning-based 3d vision, 2023.
30. Guilin Liu, Fitsum A Reda, Kevin J Shih, Ting-Chun Wang, Andrew Tao and Bryan Catanzaro, "Image inpainting for irregular holes using partial convolutions", Proceedings of the European conference on computer vision (ECCV), pp. 85-100, 2018.
