Deformable Generator Networks: Unsupervised Disentanglement of Appearance and Geometry


Abstract:

We present a deformable generator model to disentangle the appearance and geometric information of both image and video data in a purely unsupervised manner. The appearance generator network models the information related to appearance, including color, illumination, identity, and category, while the geometric generator performs geometric warping, such as rotation and stretching, by generating a deformation field that warps the generated appearance to produce the final image or video sequence. The two generators take independent latent vectors as input, which disentangles the appearance and geometric information in images or video sequences. For video data, a nonlinear transition model is introduced into both the appearance and geometric generators to capture the dynamics over time. The proposed scheme is general and can be easily integrated into different generative models. An extensive set of qualitative and quantitative experiments shows that the appearance and geometric information can be well disentangled, and that the learned geometric generator can be conveniently transferred to other image datasets that share similar structural regularity, facilitating knowledge transfer tasks.
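The following is a minimal sketch of the idea described above, not the authors' implementation: it assumes a PyTorch-style setup, and the network sizes, the dense displacement-field parameterization, and the bilinear warping via grid_sample are illustrative assumptions chosen to show how a deformation field produced from one latent vector can warp the appearance produced from another.

```python
# Illustrative sketch only (assumed PyTorch); layer sizes and names are not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AppearanceGenerator(nn.Module):
    """Maps an appearance latent z_a to a canonical (unwarped) image."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z_a):
        return self.net(z_a)  # (B, 3, 64, 64)

class GeometricGenerator(nn.Module):
    """Maps a geometric latent z_g to a dense 2-D displacement field."""
    def __init__(self, z_dim=64, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, img_size * img_size * 2), nn.Tanh(),
        )

    def forward(self, z_g):
        b = z_g.size(0)
        return self.net(z_g).view(b, self.img_size, self.img_size, 2)  # (B, H, W, 2)

def warp(appearance, displacement):
    """Warp the appearance image with the deformation field via bilinear sampling."""
    b, _, h, w = appearance.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    return F.grid_sample(appearance, grid + displacement, align_corners=True)

# Independent latent vectors for appearance and geometry.
z_a, z_g = torch.randn(4, 64), torch.randn(4, 64)
image = warp(AppearanceGenerator()(z_a), GeometricGenerator()(z_g))
print(image.shape)  # torch.Size([4, 3, 64, 64])
```

In this sketch, fixing z_a while varying z_g changes only the geometry (the warp), and fixing z_g while varying z_a changes only the appearance, which is the disentanglement the two independent latent vectors are meant to achieve.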
Page(s): 1162 - 1179
Date of Publication: 04 August 2020

PubMed ID: 32749961


1 Introduction

Learning disentangled structures from observations [1], [2] is a fundamental problem in controlling modern deep models and understanding the world. Conceptual understanding requires a disentangled representation that separates the underlying explanatory factors and makes the important attributes of real-world data explicit [3], [4]. For instance, given an image dataset of human faces, a disentangled representation can separate appearance attributes, such as color, light source, identity, and gender, from geometric attributes, such as face shape and viewing angle. Such disentangled representations are useful not only for building more transparent and interpretable generative models, but also for a wide variety of downstream AI tasks, such as transfer learning and zero-shot inference, where humans excel but machines struggle [5]. It has also been shown that such disentangled representations are more generalizable and robust against adversarial attacks [6].
