The paper aims to transfer the item image of a given clothing product onto the corresponding area of a user image. Existing classical methods suffer from unconstrained deformation of the clothing and from occlusion caused by hair or poses, which leads to loss of detail in the try-on results. In this paper, the authors present a details-oriented virtual try-on network (DO-VTON) that synthesizes high-fidelity try-on images while preserving the characteristics of the target clothing.
The proposed try-on network consists of three modules. The fashion parsing module (FPM) generates the parsing map of a reference person image. The geometric matching module (GMM) warps the input clothing and matches it to the torso area of the reference person, guided by the parsing map. The try-on module (TOM) generates the final try-on image. In both FPM and TOM, an attention mechanism is introduced to obtain richer features, which improves the preservation of clothing characteristics. In GMM, a two-stage coarse-to-fine training strategy with a grid regularization loss (GR loss) is employed to optimize the clothing warping.
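The three-module pipeline above can be sketched as follows. This is a minimal, illustrative sketch only: the module names (FPM, GMM, TOM) follow the abstract, but all interfaces, array shapes, and the toy operations (a fixed rectangular torso mask in place of learned parsing, nearest-neighbour resizing in place of TPS warping and the GR loss, and simple masked compositing in place of the learned try-on synthesis) are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def fpm(person):
    """Fashion Parsing Module: predict a parsing map marking the torso area.
    Toy stand-in: a fixed rectangular mask instead of a learned segmentation."""
    h, w = person.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    mask[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = True
    return mask

def gmm(clothing, torso_mask):
    """Geometric Matching Module: warp the clothing onto the torso region.
    Toy stand-in: nearest-neighbour resize to the torso bounding box."""
    ys, xs = np.where(torso_mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    ry = np.arange(h) * clothing.shape[0] // h
    rx = np.arange(w) * clothing.shape[1] // w
    return clothing[ry][:, rx], (ys.min(), xs.min())

def tom(person, warped, offset, torso_mask):
    """Try-On Module: fuse the warped clothing into the person image.
    Toy stand-in: masked paste instead of learned image synthesis."""
    out = person.copy()
    y0, x0 = offset
    h, w = warped.shape[:2]
    region = out[y0 : y0 + h, x0 : x0 + w]
    m = torso_mask[y0 : y0 + h, x0 : x0 + w]
    region[m] = warped[m]
    return out

def do_vton(person, clothing):
    # Stage 1: parse the person; Stage 2: warp the clothing; Stage 3: fuse.
    mask = fpm(person)
    warped, offset = gmm(clothing, mask)
    return tom(person, warped, offset, mask)
```

The point of the sketch is the data flow: the parsing map produced in stage one conditions the geometric warp in stage two, and both feed the final synthesis in stage three.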
In this paper, the authors propose a three-stage image-based virtual try-on network, DO-VTON, that generates realistic try-on images while extensively preserving the characteristics of the target clothing.
The authors’ proposed algorithm provides a promising tool for image-based virtual try-on. It offers consumers a way to preview favored clothes when shopping online, which could help reduce the return rate in e-commerce.
Luo, W. and Zhong, Y. (2023), "DO-VTON: a details-oriented virtual try-on network", International Journal of Clothing Science and Technology, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/IJCST-02-2022-0017
Copyright © 2022, Emerald Publishing Limited