1. Introduction
Image compositing is an essential image-editing task that aims to insert an object from one image into another in a realistic way. Conventionally, compositing an object into a new scene involves many sub-tasks, including color harmonization [6], [7], [19], [51], relighting [52], and shadow generation [16], [29], [43], in order to blend the object naturally into the new image. As shown in Tab. 1, most previous methods [6], [7], [16], [19], [28], [43] focus on a single one of these sub-tasks. Consequently, they must be combined appropriately to obtain a composite image in which the input object is re-synthesized with color, lighting, and shadow that are consistent with the background scene. As shown in Fig. 1, results produced in this way can still look unnatural, partly because the viewpoint of the inserted object differs from that of the background. Prior works address only one or two aspects of object compositing and cannot synthesize novel views; in contrast, our model addresses all four aspects listed in Tab. 1.
Method | Geometry | Light | Shadow | View
---|---|---|---|---
ST-GAN [ ] | ✓ | | |
SSH [19] | | ✓ | |
DCCF [ ] | | ✓ | |
SSN [ ] | | | ✓ |
SGRNet [ ] | | | ✓ |
GCC-GAN [5] | ✓ | ✓ | |
Ours | ✓ | ✓ | ✓ | ✓