Full metadata record

DC Field | Value | Language
dc.contributor.author | Chien, Jen-Tzung | en_US
dc.contributor.author | Misbullah, Alim | en_US
dc.date.accessioned | 2018-08-21T05:56:50Z | -
dc.date.available | 2018-08-21T05:56:50Z | -
dc.date.issued | 2016-01-01 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/146706 | -
dc.description.abstract | Speech recognition has been significantly improved by applying acoustic models based on deep neural networks, which can be realized as feedforward NNs (FNNs) or recurrent NNs (RNNs). In general, the FNN is well suited to projecting observations onto a deep invariant feature space, while the RNN is beneficial for capturing temporal information in sequential data for speech recognition. An RNN based on long short-term memory (LSTM) is capable of storing inputs over a long time period and thus exploits a self-learned mechanism for long-range temporal context. Considering the complementary modeling capabilities of the FNN and the RNN, this paper presents a deep model constructed by stacking LSTM and FNN layers. Through a cascade of LSTM cells and fully-connected feedforward units, we explore temporal patterns and summarize the long history of previous inputs in a deep learning machine. Experiments on the 3rd CHiME challenge and Aurora-4 show that stacks of the hybrid model with an FNN post-processor outperform the stand-alone FNN and LSTM as well as the other hybrid models for robust speech recognition. | en_US
dc.language.iso | en_US | en_US
dc.subject | speech recognition | en_US
dc.subject | acoustic modeling | en_US
dc.subject | hybrid neural network | en_US
dc.subject | long short-term memory | en_US
dc.title | Deep Long Short-Term Memory Networks for Speech Recognition | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2016 10TH INTERNATIONAL SYMPOSIUM ON CHINESE SPOKEN LANGUAGE PROCESSING (ISCSLP) | en_US
dc.contributor.department | 電機工程學系 | zh_TW
dc.contributor.department | Department of Electrical and Computer Engineering | en_US
dc.identifier.wosnumber | WOS:000405610900013 | en_US
Appears in Collections: Conference Papers
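The abstract describes the model's architecture: LSTM layers that capture long-range temporal context, cascaded with fully-connected feedforward units acting as a post-processor. The following is a minimal PyTorch sketch of such a stacked LSTM+FNN acoustic model; the class name StackedLSTMFNN, all layer sizes, and the senone count are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class StackedLSTMFNN(nn.Module):
    # Hypothetical LSTM+FNN hybrid: LSTM layers summarize long-range
    # temporal context; a feedforward post-processor maps each frame
    # to senone logits. All dimensions below are illustrative.
    def __init__(self, feat_dim=40, lstm_hidden=512, num_lstm_layers=2,
                 fnn_hidden=1024, num_senones=2000):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, lstm_hidden,
                            num_layers=num_lstm_layers, batch_first=True)
        self.fnn = nn.Sequential(
            nn.Linear(lstm_hidden, fnn_hidden),
            nn.ReLU(),
            nn.Linear(fnn_hidden, num_senones),
        )

    def forward(self, x):
        # x: (batch, frames, feat_dim) acoustic features, e.g. filterbanks
        h, _ = self.lstm(x)   # h: (batch, frames, lstm_hidden)
        return self.fnn(h)    # frame-wise senone logits

# Usage sketch: a batch of 4 utterances, 100 frames, 40-dim features
feats = torch.randn(4, 100, 40)
logits = StackedLSTMFNN()(feats)  # -> shape (4, 100, 2000)
```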