Full metadata record
DC Field: Value (Language)
dc.contributor.author: Chien, Jen-Tzung (en_US)
dc.contributor.author: Ku, Yuan-Chu (en_US)
dc.date.accessioned: 2017-04-21T06:48:53Z
dc.date.available: 2017-04-21T06:48:53Z
dc.date.issued: 2014 (en_US)
dc.identifier.isbn: 978-1-4799-7129-9 (en_US)
dc.identifier.uri: http://hdl.handle.net/11536/135882
dc.description.abstract: This paper presents a Bayesian approach to constructing the recurrent neural network language model (RNN-LM) for speech recognition. Our idea is to regularize the RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function of the Bayesian RNN (BRNN) is formed as a regularized cross-entropy error function. The regularized model is constructed not only by training the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter through maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed by selecting a small set of salient outer products, and it is shown to be effective for the BRNN-LM. The BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show promising improvements from applying the BRNN-LM with different amounts of training data. (en_US)
dc.language.iso: en_US (en_US)
dc.subject: Recurrent neural network (en_US)
dc.subject: language model (en_US)
dc.subject: Bayesian learning (en_US)
dc.subject: Hessian matrix (en_US)
dc.title: BAYESIAN RECURRENT NEURAL NETWORK LANGUAGE MODEL (en_US)
dc.type: Proceedings Paper (en_US)
dc.identifier.journal: 2014 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY SLT 2014 (en_US)
dc.citation.spage: 206 (en_US)
dc.citation.epage: 211 (en_US)
dc.contributor.department: 電機學院 (zh_TW)
dc.contributor.department: College of Electrical and Computer Engineering (en_US)
dc.identifier.wosnumber: WOS:000380375100035 (en_US)
dc.citation.woscount: 0 (en_US)
Appears in Collections: Conference Papers
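The abstract above describes a concrete training loop: a Gaussian prior turns the cross-entropy objective into a regularized one, the parameters are fitted by MAP, the prior precision is re-estimated by maximizing the marginal likelihood, and the Hessian needed for that evidence step is approximated by a small set of salient gradient outer products. The sketch below illustrates that recipe under stated assumptions, not the authors' implementation: a plain softmax classifier stands in for the full RNN-LM, the salience rule (keep the K per-sample gradients with the largest norm), the MacKay-style update alpha = gamma / ||W||^2, the learning rate, and all variable names (`alpha`, `grad`, `K`) are illustrative choices.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy data: a softmax classifier over V "words" stands in for the RNN-LM.
rng = np.random.default_rng(0)
V, D, N = 5, 4, 200                         # vocabulary size, feature dim, samples
X = rng.normal(size=(N, D))
W_true = rng.normal(size=(D, V))
y = np.array([rng.choice(V, p=p) for p in softmax(X @ W_true)])

def grad(W, alpha):
    # Gradient of the regularized cross entropy
    #   E(W) = -sum_n log p(y_n | x_n, W) + (alpha / 2) * ||W||^2,
    # i.e. the MAP objective under a zero-mean Gaussian prior with precision alpha.
    E = softmax(X @ W)
    E[np.arange(N), y] -= 1.0               # d(cross entropy)/d(logits), per sample
    return X.T @ E + alpha * W

alpha, W, K = 1.0, np.zeros((D, V)), 50     # prior precision, weights, salient set size
for round_ in range(4):
    for _ in range(500):                    # MAP training under the current prior
        W -= 1e-3 * grad(W, alpha)

    # Rapid Hessian approximation: a sum of outer products g_n g_n^T, keeping
    # only the K per-sample gradients with the largest norm ("salient" terms;
    # the selection rule here is an assumption for illustration).
    E = softmax(X @ W)
    E[np.arange(N), y] -= 1.0
    G = np.einsum('nd,nv->ndv', X, E).reshape(N, -1)   # per-sample gradient vectors
    salient = np.argsort(np.linalg.norm(G, axis=1))[-K:]
    H = G[salient].T @ G[salient]

    # Evidence (marginal likelihood) step, MacKay-style: gamma counts the
    # well-determined parameters; the prior precision is then re-estimated.
    lam = np.clip(np.linalg.eigvalsh(H), 0.0, None)
    gamma = np.sum(lam / (lam + alpha))
    alpha = gamma / (W.ravel() @ W.ravel() + 1e-12)
    print(f"round {round_}: alpha = {alpha:.3f}, gamma = {gamma:.2f}")
```

The alternation of MAP training and evidence-based re-estimation of alpha mirrors the procedure the abstract names; the outer-product subset is what keeps the Hessian cheap, since only K small rank-one terms are accumulated instead of the exact second derivatives of the network.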