Full metadata record
DC Field | Value | Language
dc.contributor.author | Wang, Hui-Po | en_US
dc.contributor.author | Ko, Wei-Jan | en_US
dc.contributor.author | Peng, Wen-Hsiao | en_US
dc.date.accessioned | 2019-08-02T02:24:15Z | -
dc.date.available | 2019-08-02T02:24:15Z | -
dc.date.issued | 2018-01-01 | en_US
dc.identifier.isbn | 978-988-14768-5-2 | en_US
dc.identifier.issn | 2309-9402 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/152427 | -
dc.description.abstract | Most deep latent factor models choose simple priors for simplicity, tractability, or not knowing what prior to use. Recent studies show that the choice of the prior may have a profound effect on the expressiveness of the model, especially when its generative network has limited capacity. In this paper, we propose to learn a proper prior from data for adversarial autoencoders (AAEs). We introduce the notion of code generators to transform manually selected simple priors into ones that can better characterize the data distribution. Experimental results show that the proposed model can generate images of better quality and learn better disentangled representations than AAEs in both supervised and unsupervised settings. Lastly, we demonstrate its ability to perform cross-domain translation in a text-to-image synthesis task. | en_US
dc.language.iso | en_US | en_US
dc.title | Learning Priors for Adversarial Autoencoders | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2018 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC) | en_US
dc.citation.spage | 1388 | en_US
dc.citation.epage | 1396 | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:000468383400224 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conference Papers
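
The abstract above describes an adversarial autoencoder (AAE) whose prior is not fixed but produced by a learned "code generator" that transforms a simple base prior. The following is a minimal sketch of that idea, not the paper's implementation: PyTorch, small MLPs, flattened 28x28 inputs, an N(0, I) base prior, and the training signal for the code generator (an image-space GAN loss) are all assumptions, since the record gives no architectural or training details.

import torch
import torch.nn as nn

LATENT, X_DIM = 8, 784  # latent size and flattened 28x28 input (both assumed)

def mlp(sizes):
    # Plain ReLU MLP; the final layer is linear (produces logits/codes).
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

encoder  = mlp([X_DIM, 256, LATENT])   # x -> posterior code z
decoder  = mlp([LATENT, 256, X_DIM])   # z -> reconstruction logits
code_gen = mlp([LATENT, 256, LATENT])  # u ~ N(0, I) -> learned prior code
z_disc   = mlp([LATENT, 256, 1])       # latent critic: learned prior vs posterior
x_disc   = mlp([X_DIM, 256, 1])        # image critic: real vs decoded prior samples

bce    = nn.BCEWithLogitsLoss()
opt_ae = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
opt_zd = torch.optim.Adam(z_disc.parameters(), lr=1e-4)
opt_xd = torch.optim.Adam(x_disc.parameters(), lr=1e-4)
opt_cg = torch.optim.Adam(code_gen.parameters(), lr=1e-4)

def train_step(x):
    # x: a batch of flattened images with pixel values in [0, 1].
    n = x.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Reconstruction phase: the usual autoencoder objective.
    opt_ae.zero_grad()
    bce(decoder(encoder(x)), x).backward()
    opt_ae.step()

    # 2) Latent critic: separate codes drawn from the learned prior
    #    (code generator output) from codes produced by the encoder.
    opt_zd.zero_grad()
    z_prior = code_gen(torch.randn(n, LATENT)).detach()
    z_post  = encoder(x).detach()
    (bce(z_disc(z_prior), ones) + bce(z_disc(z_post), zeros)).backward()
    opt_zd.step()

    # 3) Regularization phase: push the encoder's codes toward the
    #    learned prior, as in a standard AAE but against a moving target.
    opt_ae.zero_grad()
    bce(z_disc(encoder(x)), ones).backward()
    opt_ae.step()

    # 4) Prior learning (assumed mechanism): an image-space GAN loss
    #    rewards the code generator for mapping N(0, I) into regions of
    #    the latent space that decode to realistic images.
    opt_xd.zero_grad()
    fake = torch.sigmoid(decoder(code_gen(torch.randn(n, LATENT)))).detach()
    (bce(x_disc(x), ones) + bce(x_disc(fake), zeros)).backward()
    opt_xd.step()

    opt_cg.zero_grad()
    fake = torch.sigmoid(decoder(code_gen(torch.randn(n, LATENT))))
    bce(x_disc(fake), ones).backward()
    opt_cg.step()

# Sampling after training: draw from the simple base prior, transform it
# with the learned code generator, then decode.
with torch.no_grad():
    samples = torch.sigmoid(decoder(code_gen(torch.randn(16, LATENT))))

The design point the sketch illustrates is that the latent critic in phase 3 no longer matches the encoder to a hand-picked prior, as in a plain AAE, but to whatever distribution code_gen currently induces; how the paper actually trains the code generator may differ from the image-space GAN loss assumed here.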