
Image Completion Using Efficient Belief Propagation Via Priority Scheduling and Dynamic Pruning



Abstract:

In this paper, a new exemplar-based framework is presented, which treats image completion, texture synthesis, and image inpainting in a unified manner. In order to be able to avoid the occurrence of visually inconsistent results, we pose all of the above image-editing tasks in the form of a discrete global optimization problem. The objective function of this problem is always well-defined, and corresponds to the energy of a discrete Markov random field (MRF). For efficiently optimizing this MRF, a novel optimization scheme, called priority belief propagation (BP), is then proposed, which carries two very important extensions over the standard BP algorithm: "priority-based message scheduling" and "dynamic label pruning." These two extensions work in cooperation to deal with the intolerable computational cost of BP, which is caused by the huge number of labels associated with our MRF. Moreover, both of our extensions are generic, since they do not rely on the use of domain-specific prior knowledge. They can, therefore, be applied to any MRF, i.e., to a very wide class of problems in image processing and computer vision, thus managing to resolve what is currently considered as one major limitation of the BP algorithm: its inefficiency in handling MRFs with very large discrete state spaces. Experimental results on a wide variety of input images are presented, which demonstrate the effectiveness of our image-completion framework for tasks such as object removal, texture synthesis, text removal, and image inpainting.
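To make the two extensions concrete, the following is a minimal sketch of a min-sum priority-BP loop on a toy discrete MRF. The priority measure (number of labels whose belief is near the node's best) and the pruning rule (drop labels whose belief exceeds the best by a fixed margin) are simplified illustrations of the ideas named in the abstract, not the paper's exact definitions:

```python
import numpy as np

def priority_bp(unary, smooth, edges, prune_margin=4.0):
    """Toy min-sum priority-BP sketch (simplified, not the paper's exact rules).

    unary:  {node: cost vector over that node's labels}
    smooth: pairwise cost function smooth(label_a, label_b)
    edges:  list of undirected neighbour pairs (i, j)
    """
    nodes = list(unary)
    nbrs = {v: [] for v in nodes}
    msg = {}
    for i, j in edges:
        nbrs[i].append(j); nbrs[j].append(i)
        msg[i, j] = np.zeros(len(unary[j]))  # message i -> j, over j's labels
        msg[j, i] = np.zeros(len(unary[i]))
    # "dynamic label pruning": each node keeps only its surviving labels
    active = {v: np.arange(len(unary[v])) for v in nodes}

    def belief(v):  # unary cost plus all incoming messages, on active labels
        b = unary[v].astype(float)
        for w in nbrs[v]:
            b += msg[w, v]
        return b[active[v]]

    committed = set()
    while len(committed) < len(nodes):
        # "priority-based message scheduling": the most confident node
        # (fewest labels close to its best belief) transmits first
        v = min((u for u in nodes if u not in committed),
                key=lambda u: np.sum(belief(u) - belief(u).min() <= prune_margin))
        b = belief(v)
        # prune labels whose belief is far from this node's best label
        active[v] = active[v][b - b.min() <= prune_margin]
        for u in nbrs[v]:  # send min-sum messages over the pruned label set
            cost = unary[v][active[v]].astype(float)
            for w in nbrs[v]:
                if w != u:
                    cost += msg[w, v][active[v]]
            for lu in range(len(unary[u])):
                msg[v, u][lu] = min(cost[k] + smooth(lv, lu)
                                    for k, lv in enumerate(active[v]))
            msg[v, u] -= msg[v, u].min()  # normalise for numerical stability
        committed.add(v)
    # assign each node the active label with the minimum final belief
    return {v: active[v][belief(v).argmin()] for v in nodes}
```

Because confident nodes both transmit first and prune aggressively, each message is computed over a small surviving label set, which is the source of the speedup the abstract describes.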
Published in: IEEE Transactions on Image Processing ( Volume: 16, Issue: 11, November 2007)
Page(s): 2649 - 2661
Date of Publication: 15 October 2007

PubMed ID: 17990742

I. Introduction

The problem of image completion can be loosely defined as follows: given an incomplete image, i.e., one with missing regions (e.g., see Fig. 1), try to fill its missing parts in such a way that a visually plausible outcome is obtained. Although the problem is simple to state, solving it successfully is far from trivial. Ideally, any algorithm designed to solve the image completion problem should have the following characteristics:

it should be able to successfully complete complex natural images;

it should also be able to handle incomplete images with (possibly) large missing parts;

all these should take place in a fully automatic manner, i.e., without intervention from the user.

Also, ideally, we would like any image completion algorithm to be able to handle the related problem of texture synthesis as well: given a small texture as input, generate an arbitrarily large output texture that maintains the visual characteristics of the input [e.g., see Fig. 2(a)]. It is exactly these requirements that make image completion, in general, a very challenging problem. Nevertheless, it can be very useful in many areas, e.g., computer graphics applications, image editing, film postproduction, and image restoration.
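As a toy illustration of the exemplar-based paradigm that texture synthesis shares with image completion (a crude Efros-Leung-style sketch, not the method proposed in this paper), each output pixel can be copied from the sample location whose causal neighbourhood best matches the pixels already synthesized above and to its left:

```python
import numpy as np

def synthesize(sample, out_h, out_w, k=1):
    """Toy exemplar-based texture synthesis in scanline order.

    sample: small 2-D greyscale texture swatch
    k:      causal window radius (pixels above/left that must match)
    Real systems use larger windows, randomized matching, and
    patch-based copying; this is only a sketch of the idea.
    """
    h, w = sample.shape
    out = np.zeros((out_h, out_w), dtype=float)
    out[0, :] = sample[0, np.arange(out_w) % w]  # seed first row from sample
    out[:, 0] = sample[np.arange(out_h) % h, 0]  # seed first column from sample
    for y in range(1, out_h):
        for x in range(1, out_w):
            best, best_cost = 0.0, np.inf
            # scan every valid source pixel; keep the one whose above/left
            # neighbours best match what has already been synthesized
            for sy in range(k, h):
                for sx in range(k, w):
                    cost = 0.0
                    for dy in range(k + 1):
                        for dx in range(k + 1):
                            if dy == 0 and dx == 0:
                                continue
                            cost += (sample[sy - dy, sx - dx]
                                     - out[y - dy, x - dx]) ** 2
                    if cost < best_cost:
                        best, best_cost = sample[sy, sx], cost
            out[y, x] = best
    return out
```

On a periodic input (e.g., vertical stripes) this greedy copying reproduces the pattern, but because each pixel is chosen independently it can drift into visually inconsistent results on complex textures; posing the same task as a global MRF optimization, as this paper does, is precisely what avoids that failure mode.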

Fig. 1. Object removal is just one of the many cases where image completion needs to be applied. In the specific example shown, the user wants to remove a person from the input image on the left. He, therefore, simply marks a region around that person, and that region must then be filled automatically so that a visually plausible outcome is obtained.

Fig. 2. (a) Texture synthesis problem. (b) The three main approaches to image completion.

