Title: FashionOn: Semantic-guided Image-based Virtual Try-on with Detailed Human and Clothing Information
Authors: Hsieh, Chia-Wei
Chen, Chieh-Yun
Chou, Chien-Lung
Shuai, Hong-Han
Liu, Jiaying
Cheng, Wen-Huang
Published under NCTU affiliation: National Chiao Tung University
Keywords: Virtual try-on; image synthesis; pose transformation; semantic-guided learning
Issue Date: 1-Jan-2019
Abstract: Image-based virtual try-on systems have attracted considerable research attention. The virtual try-on task is challenging since synthesizing try-on images involves estimating a 3D transformation from 2D images, which is an ill-posed problem. Consequently, most previous virtual try-on systems cannot handle difficult cases, e.g., body occlusions, wrinkles of clothes, and details of the hair. Moreover, existing systems require users to upload an image for the target pose, which is not user-friendly. In this paper, we aim to resolve the above challenges by proposing a novel FashionOn network that synthesizes user images fitting different clothes in arbitrary poses, providing comprehensive information about how suitable the clothes are. Specifically, given a user image, an in-shop clothing image, and a target pose (which can be arbitrarily manipulated via joint points), FashionOn learns to synthesize the try-on image through three important stages: pose-guided parsing translation, segmentation region coloring, and salient region refinement. Extensive experiments demonstrate that FashionOn preserves the details of clothing (e.g., logo, pleat, lace) and resolves the body occlusion problem, thus achieving state-of-the-art virtual try-on performance both qualitatively and quantitatively.
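Note: Read literally, the abstract describes a three-stage composition (pose-guided parsing translation, segmentation region coloring, salient region refinement). The sketch below is a minimal, hypothetical illustration of how those stages chain together; all function names, tensor shapes, and the 18-joint pose format are assumptions for illustration only and are not taken from the paper or any released code.

```python
# Illustrative sketch (not the authors' code): composing the three FashionOn
# stages named in the abstract, with hypothetical function names and shapes.
import numpy as np

def pose_guided_parsing_translation(user_parsing, target_pose):
    """Stage 1 (hypothetical): predict the human-parsing map under the target pose."""
    # Placeholder: a real model would be a conditional image-to-image network.
    return user_parsing

def segmentation_region_coloring(target_parsing, user_image, clothing_image):
    """Stage 2 (hypothetical): fill each parsed region with person/clothing appearance."""
    # Naive blend as a stand-in for a learned coloring network.
    return 0.5 * user_image + 0.5 * clothing_image

def salient_region_refinement(coarse_tryon, clothing_image):
    """Stage 3 (hypothetical): refine salient details such as logos, pleats, and lace."""
    return np.clip(coarse_tryon, 0.0, 1.0)

def fashionon_tryon(user_image, clothing_image, target_pose, user_parsing):
    """Chain the three stages into one try-on call (sketch only)."""
    target_parsing = pose_guided_parsing_translation(user_parsing, target_pose)
    coarse = segmentation_region_coloring(target_parsing, user_image, clothing_image)
    return salient_region_refinement(coarse, clothing_image)

if __name__ == "__main__":
    h, w = 256, 192                        # a common try-on resolution (assumption)
    user = np.random.rand(h, w, 3)         # user photo
    cloth = np.random.rand(h, w, 3)        # in-shop clothing image
    pose = np.random.rand(18, 2)           # joint points defining the target pose (assumption)
    parsing = np.zeros((h, w), np.int64)   # human-parsing label map
    result = fashionon_tryon(user, cloth, pose, parsing)
    print(result.shape)                    # (256, 192, 3)
```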
URI: http://dx.doi.org/10.1145/3343031.3351075
http://hdl.handle.net/11536/153842
ISBN: 978-1-4503-6889-6
DOI: 10.1145/3343031.3351075
Journal: PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19)
Start page: 275
End page: 283
Appears in Collections: Conference Papers