Full metadata record
DC Field | Value | Language
dc.contributor.author | Hsieh, Chia-Wei | en_US
dc.contributor.author | Chen, Chieh-Yun | en_US
dc.contributor.author | Chou, Chien-Lung | en_US
dc.contributor.author | Shuai, Hong-Han | en_US
dc.contributor.author | Cheng, Wen-Huang | en_US
dc.date.accessioned | 2020-05-05T00:02:00Z | -
dc.date.available | 2020-05-05T00:02:00Z | -
dc.date.issued | 2019-01-01 | en_US
dc.identifier.isbn | 978-1-5386-6249-6 | en_US
dc.identifier.issn | 1522-4880 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/154054 | -
dc.description.abstract | The image-based virtual try-on system has raised research attention recently, but it still requires uploading an image of the user in the target pose. We present a novel learning model, Fit-Me network, to seamlessly fit in-shop clothing onto a person image and simultaneously transform the pose of the person image to another given one. The proposed Fit-Me network not only saves users the time of changing clothes physically but also provides comprehensive information about how suitable the clothes are. By facilitating arbitrary pose transformation, we can generate consecutive poses that give users more information, from different viewpoints, for deciding whether to buy the clothes. | en_US
dc.language.iso | en_US | en_US
dc.subject | Virtual try-on | en_US
dc.subject | pose transformation | en_US
dc.subject | image synthesis | en_US
dc.title | FIT-ME: IMAGE-BASED VIRTUAL TRY-ON WITH ARBITRARY POSES | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) | en_US
dc.citation.spage | 4694 | en_US
dc.citation.epage | 4698 | en_US
dc.contributor.department | Published under NCTU affiliation | zh_TW
dc.contributor.department | National Chiao Tung University | en_US
dc.identifier.wosnumber | WOS:000521828604156 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conference Papers