Full metadata record
DC Field | Value | Language
dc.contributor.author | Tsou, Kai-Wei | en_US
dc.contributor.author | Chien, Jen-Tzung | en_US
dc.date.accessioned | 2018-08-21T05:57:01Z | -
dc.date.available | 2018-08-21T05:57:01Z | -
dc.date.issued | 2017-01-01 | en_US
dc.identifier.issn | 2161-0363 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/146954 | -
dc.description.abstract | Recurrent neural networks (RNNs) based on long short-term memory (LSTM) have been successfully developed for single-channel source separation. Temporal information is learned using dynamic states that evolve through time and are stored as an internal memory. Separation performance is constrained by this internal memory, which cannot sufficiently preserve the long-term characteristics of different sources. This study addresses this limitation by incorporating an external memory into the RNN, and accordingly presents a memory augmented neural network for source separation. In particular, we employ a neural Turing machine to learn a separation model for sequential speech and noise signals in the presence of different speakers and noise types. Experiments show that speech enhancement based on the memory augmented neural network consistently outperforms deep neural network and LSTM baselines in terms of the short-time objective intelligibility measure. | en_US
dc.language.iso | en_US | en_US
dc.subject | Long short-term memory | en_US
dc.subject | memory augmented neural network | en_US
dc.subject | monaural source separation | en_US
dc.title | MEMORY AUGMENTED NEURAL NETWORK FOR SOURCE SEPARATION | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2017 IEEE 27TH INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING | en_US
dc.contributor.department | 電機工程學系 | zh_TW
dc.contributor.department | Department of Electrical and Computer Engineering | en_US
dc.identifier.wosnumber | WOS:000425458700015 | en_US
Appears in Collections: Conference Papers
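The abstract above refers to a neural Turing machine, whose external memory is accessed by differentiable content-based addressing rather than the internal cell state of an LSTM. As a minimal illustration of that addressing step only (not the paper's trained model), the sketch below computes NTM-style read weights as a softmax over key-strength-scaled cosine similarities and reads a weighted average of the memory rows; the toy memory `M`, `key`, and `beta` values are hypothetical.

```python
import math

def cosine(u, v):
    # Cosine similarity with a small epsilon to avoid division by zero.
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)) + 1e-8
    return num / den

def content_addressing(memory, key, beta):
    # NTM-style content addressing: softmax over beta-scaled cosine similarities.
    scores = [beta * cosine(row, key) for row in memory]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def read(memory, weights):
    # Read vector: the attention-weighted average of all memory rows.
    cols = len(memory[0])
    return [sum(w * row[j] for w, row in zip(weights, memory)) for j in range(cols)]

# Toy external memory with 3 slots of dimension 2 (hypothetical values).
M = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
w = content_addressing(M, key=[1.0, 0.0], beta=5.0)
r = read(M, w)
```

With this key, the first memory slot matches best, so `w` concentrates most of its mass there and `r` is pulled toward `[1.0, 0.0]`; a larger `beta` sharpens the distribution further.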