Full metadata record
DC Field | Value | Language
dc.contributor.author | Chien, Jen-Tzung | en_US
dc.contributor.author | Lu, Tsai-Wei | en_US
dc.date.accessioned | 2017-04-21T06:50:19Z | -
dc.date.available | 2017-04-21T06:50:19Z | -
dc.date.issued | 2015 | en_US
dc.identifier.isbn | 978-1-4673-6997-8 | en_US
dc.identifier.issn | 1520-6149 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/135716 | -
dc.description.abstract | This paper presents a deep recurrent regularization neural network (DRRNN) for speech recognition. Our idea is to build a regularization neural network acoustic model by conducting hybrid Tikhonov and weight-decay regularization, which compensates for variations due to the input speech as well as the model parameters in the restricted Boltzmann machine, used as a pre-training stage for feature learning and structural modeling. In addition, a new backpropagation through time (BPTT) algorithm is developed by extending truncated minibatch training for the recurrent neural network, where minibatch BPTT is performed not only in the recurrent layer but also in the feedforward layer. The DRRNN acoustic model is accordingly established to capture temporal correlation in a regularization neural network. Experimental results on the RM and Aurora4 tasks show the effectiveness and robustness of using DRRNN for speech recognition. | en_US
dc.language.iso | en_US | en_US
dc.subject | Recurrent neural network | en_US
dc.subject | model regularization | en_US
dc.subject | deep learning | en_US
dc.subject | acoustic model | en_US
dc.title | DEEP RECURRENT REGULARIZATION NEURAL NETWORK FOR SPEECH RECOGNITION | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2015 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP) | en_US
dc.citation.spage | 4560 | en_US
dc.citation.epage | 4564 | en_US
dc.contributor.department | 電機學院 (College of Electrical and Computer Engineering) | zh_TW
dc.contributor.department | College of Electrical and Computer Engineering | en_US
dc.identifier.wosnumber | WOS:000368452404140 | en_US
dc.citation.woscount | 1 | en_US

Appears in Collections: Conferences Paper
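The abstract describes combining Tikhonov regularization (penalizing sensitivity to input variations) with weight decay (penalizing parameter norms) in one objective. The sketch below is a minimal, hypothetical illustration of that hybrid penalty for a single tanh layer in NumPy; the layer shape, coefficient values, and function names are assumptions for illustration, not the paper's implementation (which applies the idea within an RBM pre-training stage).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 5 examples, 3 input features, 2 targets (illustrative sizes).
X = rng.normal(size=(5, 3))
Y = rng.normal(size=(5, 2))
W = rng.normal(scale=0.1, size=(3, 2))
b = np.zeros(2)

LAM_WD = 1e-3   # weight-decay coefficient (assumed value)
LAM_TIK = 1e-3  # Tikhonov coefficient (assumed value)

def regularized_loss(W, b, X, Y):
    """Mean-squared error plus a hybrid of two penalties:

    - weight decay: squared norm of the parameters, shrinking the model;
    - Tikhonov term: mean squared Frobenius norm of the per-example
      Jacobian of the layer output w.r.t. its input, damping the
      output's sensitivity to input variations.
    """
    H = np.tanh(X @ W + b)                  # (5, 2) layer outputs
    mse = np.mean((H - Y) ** 2)

    weight_decay = LAM_WD * (np.sum(W ** 2) + np.sum(b ** 2))

    # Jacobian of tanh(xW + b) w.r.t. x for example n has entries
    # (1 - H[n, j]**2) * W[i, j], so its squared Frobenius norm is
    # sum_j (1 - H[n, j]**2)**2 * sum_i W[i, j]**2.
    g = 1.0 - H ** 2                        # elementwise tanh derivative
    jac_sq = np.einsum('nj,ij->n', g ** 2, W ** 2)
    tikhonov = LAM_TIK * np.mean(jac_sq)

    return mse + weight_decay + tikhonov
```

Because both penalties are added to the data-fit term, they can be traded off independently: raising `LAM_TIK` flattens the network's response to perturbed inputs, while raising `LAM_WD` shrinks the weights themselves.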