
Towards Multi-Pose Guided Virtual Try-On Network


Abstract:

Virtual try-on systems under arbitrary human poses have significant application potential, yet also raise extensive challenges, such as self-occlusions, heavy misalignment among different poses, and complex clothes textures. Existing virtual try-on methods can only transfer clothes given a fixed human pose, and still show unsatisfactory performance, often failing to preserve person identity or texture details and offering limited pose diversity. This paper makes the first attempt towards a multi-pose guided virtual try-on system, which enables clothes to be transferred onto a person in diverse poses. Given an input person image, a desired clothes image, and a desired pose, the proposed Multi-pose Guided Virtual Try-On Network (MG-VTON) generates a new person image after fitting the desired clothes onto the person and manipulating the pose. MG-VTON is constructed in three stages: 1) a conditional human parsing network that matches both the desired pose and the desired clothes shape; 2) a deep Warping Generative Adversarial Network (Warp-GAN) that warps the desired clothes appearance into the synthesized human parsing map and alleviates the misalignment between the input human pose and the desired one; 3) a refinement render network that recovers the texture details of the clothes and removes artifacts, based on multi-pose composition masks. Extensive experiments on commonly used datasets and our newly collected virtual try-on benchmark, the largest to date, demonstrate that MG-VTON significantly outperforms all state-of-the-art methods both qualitatively and quantitatively, showing promising virtual try-on performance.
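
As a rough illustration of how the three stages compose, the sketch below chains placeholder networks in PyTorch. The layer choices, channel counts (e.g., 18-channel pose heatmaps, 20 parsing classes), and input resolution are assumptions made only for this example and do not reproduce the architectures described in the paper.

# Minimal sketch of the three-stage MG-VTON pipeline described in the abstract.
# All modules here are placeholder conv stacks; shapes and channel counts are assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Simple stand-in block used for each sub-network.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class ConditionalParsingNet(nn.Module):
    """Stage 1: predict a human parsing map conditioned on the desired pose and clothes shape."""
    def __init__(self, num_parts=20):
        super().__init__()
        # inputs: person image (3) + pose heatmaps (18) + clothes mask (1)
        self.net = nn.Sequential(conv_block(3 + 18 + 1, 64), nn.Conv2d(64, num_parts, 1))

    def forward(self, person, pose, clothes_mask):
        return self.net(torch.cat([person, pose, clothes_mask], dim=1)).softmax(dim=1)


class WarpGAN(nn.Module):
    """Stage 2: render the warped clothes appearance onto the synthesized parsing map."""
    def __init__(self, num_parts=20):
        super().__init__()
        # inputs: clothes image (3) + parsing map (num_parts) + pose heatmaps (18)
        self.generator = nn.Sequential(conv_block(3 + num_parts + 18, 64), nn.Conv2d(64, 3, 1))

    def forward(self, clothes, parsing, pose):
        return torch.tanh(self.generator(torch.cat([clothes, parsing, pose], dim=1)))


class RefinementRender(nn.Module):
    """Stage 3: recover texture details via a predicted composition mask."""
    def __init__(self):
        super().__init__()
        # outputs 3 rendered channels + 1 composition-mask channel
        self.net = nn.Sequential(conv_block(3 + 3, 64), nn.Conv2d(64, 4, 1))

    def forward(self, coarse, clothes):
        out = self.net(torch.cat([coarse, clothes], dim=1))
        render, mask = torch.tanh(out[:, :3]), torch.sigmoid(out[:, 3:])
        # blend the coarse result and the refined render with the learned mask
        return mask * render + (1 - mask) * coarse


if __name__ == "__main__":
    person = torch.randn(1, 3, 256, 192)
    clothes = torch.randn(1, 3, 256, 192)
    pose = torch.randn(1, 18, 256, 192)        # target-pose keypoint heatmaps (assumed encoding)
    clothes_mask = torch.randn(1, 1, 256, 192)

    parsing = ConditionalParsingNet()(person, pose, clothes_mask)
    coarse = WarpGAN()(clothes, parsing, pose)
    result = RefinementRender()(coarse, clothes)
    print(result.shape)  # torch.Size([1, 3, 256, 192])

The point of the sketch is the data flow: the parsing map produced in stage 1 conditions the warping generator in stage 2, and the refinement stage blends the coarse output with a learned composition mask rather than regenerating the whole image.
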
Date of Conference: 27 October 2019 - 02 November 2019
Date Added to IEEE Xplore: 27 February 2020
Conference Location: Seoul, Korea (South)

1. Introduction

Virtual try-on, which enables users to virtually try on clothes to check size or style, has substantial commercial value and has attracted extensive attention in computer vision. Many virtual try-on systems [13, 38] have been presented and achieve promising results when the pose is fixed. However, these approaches usually learn to synthesize the image conditioned only on the clothes. When given a different pose, they tend to synthesize blurry images, losing most of the details and style, as shown in Figure 4.

Some results of our model. The clothes and pose images are shown in the first row, the person images in the first column, and the results manipulated by both clothes and pose in the remaining columns.

