Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Yen-Cheng | en_US |
dc.contributor.author | Chiu, Wei-Chen | en_US |
dc.contributor.author | Wang, Sheng-De | en_US |
dc.contributor.author | Wang, Yu-Chiang Frank | en_US |
dc.date.accessioned | 2018-08-21T05:57:02Z | - |
dc.date.available | 2018-08-21T05:57:02Z | - |
dc.date.issued | 2017-01-01 | en_US |
dc.identifier.issn | 2161-0363 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/146958 | - |
dc.description.abstract | Generating photo-realistic images from sketches of multiple styles is one of the most challenging tasks in image synthesis, with important applications such as producing facial composites of suspects. While machine learning techniques have been applied to this problem, the requirement of collecting paired sketch and face photo images limits the use of a learned model for rendering sketches of different styles. In this paper, we propose a novel deep learning model, the Domain-adaptive Generative Adversarial Network (DA-GAN). DA-GAN performs cross-style sketch-to-photo inversion, mitigating the differences across input sketch styles without the need to collect a large number of sketch and face image pairs for training. In experiments, we show that our method produces satisfactory results and performs favorably against state-of-the-art approaches. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | Image Inversion | en_US |
dc.subject | Deep Learning | en_US |
dc.subject | Convolutional Neural Network | en_US |
dc.subject | Generative Adversarial Network | en_US |
dc.title | DOMAIN-ADAPTIVE GENERATIVE ADVERSARIAL NETWORKS FOR SKETCH-TO-PHOTO INVERSION | en_US |
dc.type | Proceedings Paper | en_US |
dc.identifier.journal | 2017 IEEE 27TH INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING | en_US |
dc.contributor.department | Published under the name of NCTU (交大名義發表) | zh_TW |
dc.contributor.department | National Chiao Tung University | en_US |
dc.identifier.wosnumber | WOS:000425458700076 | en_US |
Appears in Collections: | Conference Papers |