Full metadata record
DC Field | Value | Language
dc.contributor.author | Liu, Yen-Cheng | en_US
dc.contributor.author | Chiu, Wei-Chen | en_US
dc.contributor.author | Wang, Sheng-De | en_US
dc.contributor.author | Wang, Yu-Chiang Frank | en_US
dc.date.accessioned | 2018-08-21T05:57:02Z | -
dc.date.available | 2018-08-21T05:57:02Z | -
dc.date.issued | 2017-01-01 | en_US
dc.identifier.issn | 2161-0363 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/146958 | -
dc.description.abstract | Generating photo-realistic images from sketches of multiple styles is a challenging task in image synthesis, with important applications such as producing facial composites of suspects. While machine learning techniques have been applied to this problem, the requirement of collecting paired sketch and face photo images limits the use of the learned model for rendering sketches of different styles. In this paper, we propose a novel deep learning model, the Domain-adaptive Generative Adversarial Network (DA-GAN). DA-GAN performs cross-style sketch-to-photo inversion, mitigating the differences across input sketch styles without the need to collect a large number of sketch and face image pairs for training. In experiments, we show that our method produces satisfactory results and performs favorably against state-of-the-art approaches. | en_US
dc.language.iso | en_US | en_US
dc.subject | Image Inversion | en_US
dc.subject | Deep Learning | en_US
dc.subject | Convolutional Neural Network | en_US
dc.subject | Generative Adversarial Network | en_US
dc.title | DOMAIN-ADAPTIVE GENERATIVE ADVERSARIAL NETWORKS FOR SKETCH-TO-PHOTO INVERSION | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2017 IEEE 27TH INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING | en_US
dc.contributor.department | Published under the name of National Chiao Tung University | zh_TW
dc.contributor.department | National Chiao Tung University | en_US
dc.identifier.wosnumber | WOS:000425458700076 | en_US
Appears in Collections: Conferences Paper