Full metadata record

DC Field | Value | Language
dc.contributor.author | Chien, Jen-Tzung | en_US
dc.contributor.author | Lu, Tsai-Wei | en_US
dc.date.accessioned | 2017-04-21T06:48:53Z | -
dc.date.available | 2017-04-21T06:48:53Z | -
dc.date.issued | 2014 | en_US
dc.identifier.isbn | 978-1-4799-7129-9 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/135881 | -
dc.description.abstract | Deep neural networks (DNNs) have been widely demonstrated to achieve high performance in different speech recognition tasks. This paper focuses on the issue of model regularization in DNN acoustic models. Our idea is to compensate for perturbations over training samples in the restricted Boltzmann machine (RBM), which is applied as a pre-training stage for unsupervised feature learning and structural modeling. We introduce Tikhonov regularization into the pre-training procedure and pursue invariance of the objective function under variations in the input samples. This Tikhonov regularization is further combined with regularization based on weight decay, which accordingly reduces the error function in supervised cross-entropy training. Experimental results on the RM and Aurora4 tasks show that hybrid regularization in RBM pre-training improves the training condition of the DNN acoustic model and the robustness of speech recognition performance. | en_US
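The hybrid regularization described in the abstract combines a Tikhonov term, which penalizes the sensitivity of the objective to small input perturbations, with standard weight decay. A minimal sketch of this idea is given below; it is an illustrative formulation, not the paper's exact derivation, and the function names, penalty weights (`lam_tik`, `lam_wd`), and finite-difference approximation of the sensitivity term are all assumptions introduced here.

```python
import numpy as np

def hybrid_regularized_loss(W, x, loss_fn, lam_tik=1e-3, lam_wd=1e-4,
                            eps=1e-2, n_dirs=4, rng=None):
    """Illustrative hybrid regularizer (assumption: not the paper's exact form).

    Tikhonov term: sensitivity of the loss to small perturbations of the
    input x, approximated by finite differences along random unit
    directions. Weight-decay term: squared L2 norm of the weights W.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    base = loss_fn(W, x)                      # unregularized objective
    tik = 0.0
    for _ in range(n_dirs):
        d = rng.standard_normal(x.shape)      # random perturbation direction
        d /= np.linalg.norm(d) + 1e-12        # normalize to unit length
        # squared directional derivative of the loss w.r.t. the input
        tik += ((loss_fn(W, x + eps * d) - base) / eps) ** 2
    tik /= n_dirs                             # average over sampled directions
    wd = np.sum(W ** 2)                       # weight-decay penalty
    return base + lam_tik * tik + lam_wd * wd

# Usage with a simple quadratic loss standing in for the RBM objective:
W = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.ones(2)
loss_fn = lambda W, x: float(np.sum((W @ x) ** 2))
total = hybrid_regularized_loss(W, x, loss_fn)
```

Driving the Tikhonov term toward zero makes the objective locally flat around each training sample, which is the invariance property the abstract refers to; weight decay then bounds the overall magnitude of the weights.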
dc.language.iso | en_US | en_US
dc.subject | Tikhonov regularization | en_US
dc.subject | deep neural network | en_US
dc.subject | acoustic model | en_US
dc.subject | speech recognition | en_US
dc.title | TIKHONOV REGULARIZATION FOR DEEP NEURAL NETWORK ACOUSTIC MODELING | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2014 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY (SLT 2014) | en_US
dc.citation.spage | 147 | en_US
dc.citation.epage | 152 | en_US
dc.contributor.department | College of Electrical Engineering (電機學院) | zh_TW
dc.contributor.department | College of Electrical and Computer Engineering | en_US
dc.identifier.wosnumber | WOS:000380375100025 | en_US
dc.citation.woscount | 1 | en_US

Appears in Collections: Conference Papers