Full metadata record
DC Field | Value | Language
dc.contributor.author | Liu, Yen-Cheng | en_US
dc.contributor.author | Yeh, Yu-Ying | en_US
dc.contributor.author | Fu, Tzu-Chien | en_US
dc.contributor.author | Wang, Sheng-De | en_US
dc.contributor.author | Chiu, Wei-Chen | en_US
dc.contributor.author | Wang, Yu-Chiang Frank | en_US
dc.date.accessioned | 2019-04-02T06:04:35Z | -
dc.date.available | 2019-04-02T06:04:35Z | -
dc.date.issued | 2018-01-01 | en_US
dc.identifier.issn | 1063-6919 | en_US
dc.identifier.uri | http://dx.doi.org/10.1109/CVPR.2018.00924 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/151020 | -
dc.description.abstract | While representation learning aims to derive interpretable features for describing visual data, representation disentanglement further results in such features so that particular image attributes can be identified and manipulated. However, one cannot easily address this task without observing ground truth annotation for the training data. To address this problem, we propose a novel deep learning model of Cross-Domain Representation Disentangler (CDRD). By observing fully annotated source-domain data and unlabeled target-domain data of interest, our model bridges the information across data domains and transfers the attribute information accordingly. Thus, cross-domain feature disentanglement and adaptation can be jointly performed. In the experiments, we provide qualitative results to verify our disentanglement capability. Moreover, we further confirm that our model can be applied for solving classification tasks of unsupervised domain adaptation, and performs favorably against state-of-the-art image disentanglement and translation methods. | en_US
dc.language.iso | en_US | en_US
dc.title | Detach and Adapt: Learning Cross-Domain Disentangled Deep Representation | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.doi | 10.1109/CVPR.2018.00924 | en_US
dc.identifier.journal | 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | en_US
dc.citation.spage | 8867 | en_US
dc.citation.epage | 8876 | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:000457843609004 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conference Papers