Full metadata record
DC Field | Value | Language
dc.contributor.author | Vu-Hoang Tran | en_US
dc.contributor.author | Huang, Ching-Chun | en_US
dc.date.accessioned | 2020-05-05T00:01:58Z | -
dc.date.available | 2020-05-05T00:01:58Z | -
dc.date.issued | 2019-01-01 | en_US
dc.identifier.isbn | 978-1-7281-4569-3 | en_US
dc.identifier.issn | 1062-922X | en_US
dc.identifier.uri | http://hdl.handle.net/11536/154026 | -
dc.description.abstract | In this paper, we address the challenges of unsupervised domain adaptation and propose a novel three-in-one framework in which three tasks, namely domain adaptation, disentangled representation learning, and style transfer, are considered simultaneously. First, the learned features are disentangled into common parts and specific parts. The common parts represent the transferable features, which are suitable for domain adaptation with less negative transfer. Conversely, the specific parts characterize the unique style of each individual domain. Based on this, we introduce the new concept of feature exchange across domains, which not only enhances the transferability of the common features but is also useful for image style transfer. These designs allow us to introduce five types of training objectives to realize the three challenging tasks at the same time. The experimental results show that our architecture adapts well to both full transfer learning and partial transfer learning upon a well-learned disentangled representation. In addition, the trained network demonstrates high potential for generating style-transferred images. | en_US
dc.language.iso | en_US | en_US
dc.title | Domain Adaptation Meets Disentangled Representation Learning and Style Transfer | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2019 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC) | en_US
dc.citation.spage | 2998 | en_US
dc.citation.epage | 3005 | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:000521353903004 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conferences Paper
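
The feature-exchange idea described in the abstract above can be illustrated with a minimal sketch: an encoder splits each sample's features into a common (transferable) part and a domain-specific (style) part, and decoding a source common part together with a target specific part yields a style-transferred representation. The module names, layer sizes, and single-layer networks below are assumptions made for illustration only; they are not the architecture or the five training objectives used in the paper.

```python
# Illustrative sketch of feature disentanglement and cross-domain feature
# exchange. Not the authors' implementation; all dimensions are arbitrary.
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Splits an input feature vector into a common part and a specific part."""
    def __init__(self, in_dim=256, common_dim=128, specific_dim=64):
        super().__init__()
        self.common = nn.Sequential(nn.Linear(in_dim, common_dim), nn.ReLU())
        self.specific = nn.Sequential(nn.Linear(in_dim, specific_dim), nn.ReLU())

    def forward(self, x):
        # Common part: domain-shared, transferable content.
        # Specific part: the unique style of the individual domain.
        return self.common(x), self.specific(x)

class Decoder(nn.Module):
    """Reconstructs a feature vector from a (common, specific) pair."""
    def __init__(self, common_dim=128, specific_dim=64, out_dim=256):
        super().__init__()
        self.net = nn.Linear(common_dim + specific_dim, out_dim)

    def forward(self, c, s):
        return self.net(torch.cat([c, s], dim=1))

enc, dec = DisentangledEncoder(), Decoder()
x_src = torch.randn(8, 256)   # source-domain features (dummy data)
x_tgt = torch.randn(8, 256)   # target-domain features (dummy data)

c_src, s_src = enc(x_src)
c_tgt, s_tgt = enc(x_tgt)

# Feature exchange across domains: pair source content with target style
# (and vice versa); decoding these mixed pairs is what enables style transfer.
src_in_tgt_style = dec(c_src, s_tgt)
tgt_in_src_style = dec(c_tgt, s_src)
print(src_in_tgt_style.shape, tgt_in_src_style.shape)
```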