Full metadata record
DC Field | Value | Language
dc.contributor.author | Chien, Jen-Tzung | en_US
dc.contributor.author | Chen, Ching-Huai | en_US
dc.date.accessioned | 2017-04-21T06:48:59Z | -
dc.date.available | 2017-04-21T06:48:59Z | -
dc.date.issued | 2016 | en_US
dc.identifier.isbn | 978-1-4799-9988-0 | en_US
dc.identifier.issn | 1520-6149 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/136364 | -
dc.description.abstract | This paper presents a new non-linear dimensionality reduction method based on stochastic neighbor embedding. A deep neural network is developed for discriminative manifold learning where the class information in the transformed low-dimensional space is preserved. Importantly, the objective function for deep manifold learning is formed as the Kullback-Leibler divergence between the probability measures of the labeled samples in the high-dimensional and low-dimensional spaces. Unlike conventional methods, the derived objective does not require an empirically tuned parameter. This objective is optimized so that samples from the same class are attracted to be close together while samples from different classes are pushed far apart. In experiments on image and audio tasks, we illustrate the effectiveness of the proposed discriminative manifold learning in terms of visualization and classification performance. | en_US
dc.language.iso | en_US | en_US
dc.subject | Manifold learning | en_US
dc.subject | deep neural network | en_US
dc.subject | discriminative learning | en_US
dc.subject | pattern classification | en_US
dc.title | DEEP DISCRIMINATIVE MANIFOLD LEARNING | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS | en_US
dc.citation.spage | 2672 | en_US
dc.citation.epage | 2676 | en_US
dc.contributor.department | 電機工程學系 | zh_TW
dc.contributor.department | Department of Electrical and Computer Engineering | en_US
dc.identifier.wosnumber | WOS:000388373402163 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conference Papers
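
The abstract above describes an objective formed as the Kullback-Leibler divergence between probability measures of labeled samples in the high-dimensional and low-dimensional spaces. The sketch below is only a minimal illustration of that idea, not the paper's exact formulation: it assumes, as a simplification, that the target probability P is defined from class labels alone (uniform mass on same-class pairs) and that the embedding probability Q uses a Gaussian kernel on low-dimensional embeddings produced by some network (here just a given array `z`).

```python
# Minimal sketch of a discriminative SNE-style KL objective (illustrative only;
# the target/embedding distributions here are simplifying assumptions, not the
# paper's exact definitions).
import numpy as np

def target_probabilities(labels):
    """P_ij: uniform over pairs (i, j), i != j, sharing the same class label."""
    same = (labels[:, None] == labels[None, :]).astype(float)
    np.fill_diagonal(same, 0.0)
    return same / same.sum()

def embedding_probabilities(z):
    """Q_ij: Gaussian-kernel similarities of the low-dimensional embeddings."""
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    w = np.exp(-d2)
    np.fill_diagonal(w, 0.0)
    return w / w.sum()

def kl_objective(labels, z, eps=1e-12):
    """KL(P || Q): small when same-class samples are close together and
    different-class samples are far apart in the embedding space."""
    p = target_probabilities(labels)
    q = embedding_probabilities(z)
    mask = p > 0
    return float(np.sum(p[mask] * np.log((p[mask] + eps) / (q[mask] + eps))))

# Example: embeddings that separate the two classes give a much lower loss
# than embeddings that mix them.
labels = np.array([0, 0, 1, 1])
z_good = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
z_bad  = np.array([[0.0, 0.0], [5.0, 5.0], [0.1, 0.0], [5.1, 5.0]])
print(kl_objective(labels, z_good), kl_objective(labels, z_bad))
```

In the paper's setting the embeddings `z` would be the output of a deep neural network, so minimizing this divergence with respect to the network parameters attracts same-class samples and pushes different-class samples apart, without an empirically tuned trade-off parameter in the objective.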