Full metadata record
DC Field | Value | Language
dc.contributor.author | Chien, Jen-Tzung | en_US
dc.contributor.author | Ku, Yuan-Chu | en_US
dc.contributor.author | Huang, Mou-Yue | en_US
dc.date.accessioned | 2015-07-21T08:31:27Z | -
dc.date.available | 2015-07-21T08:31:27Z | -
dc.date.issued | 2014-01-01 | en_US
dc.identifier.isbn | 978-1-4799-4219-0 | en_US
dc.identifier.issn | | en_US
dc.identifier.uri | http://hdl.handle.net/11536/125004 | -
dc.description.abstract | This paper presents Bayesian learning for the recurrent neural network language model (RNN-LM). Our goal is to regularize the RNN-LM by compensating for the randomness of the estimated model parameters, which is characterized by a Gaussian prior. This model is not only constructed by training the synaptic weight parameters according to the maximum a posteriori criterion but also regularized by estimating the Gaussian hyperparameter through the type-2 maximum likelihood. However, a critical issue in Bayesian RNN-LM is the heavy computation of the Hessian matrix, which is formed as the sum of a large number of outer products of high-dimensional gradient vectors. We present a rapid approximation to reduce the redundancy due to the curse of dimensionality and speed up the calculation by summing up only the salient outer products. Experiments on the 1B-Word Benchmark, Penn Treebank and Wall Street Journal corpora show that rapid Bayesian RNN-LM consistently improves the perplexity and word error rate in comparison with standard RNN-LM. | en_US
dc.language.iso | en_US | en_US
dc.subject | Hessian matrix | en_US
dc.subject | Bayesian learning | en_US
dc.subject | Recurrent neural network language model | en_US
dc.subject | speech recognition | en_US
dc.title | Rapid Bayesian Learning for Recurrent Neural Network Language Model | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2014 9TH INTERNATIONAL SYMPOSIUM ON CHINESE SPOKEN LANGUAGE PROCESSING (ISCSLP) | en_US
dc.citation.spage | 34 | en_US
dc.citation.epage | 38 | en_US
dc.contributor.department | 電機資訊學士班 | zh_TW
dc.contributor.department | Undergraduate Honors Program of Electrical Engineering and Computer Science | en_US
dc.identifier.wosnumber | WOS:000349765600008 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conference Papers
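The abstract above describes approximating the Hessian as a sum of outer products of gradient vectors and speeding it up by summing only the salient ones. Below is a minimal numerical sketch of that idea, not the authors' implementation: the abstract does not state the salience criterion, so this sketch assumes salience means the largest gradient L2 norms, and the function name and parameters are hypothetical.

import numpy as np

def rapid_outer_product_hessian(grads, num_salient):
    """Approximate a Hessian as a sum of gradient outer products,
    keeping only the most salient terms.

    grads: (T, D) array, one gradient vector per training example.
    num_salient: number of outer products k to keep, with k << T.

    Full outer-product approximation: H ~= sum_t g_t g_t^T  (D x D).
    Rapid variant sketched here: sum only the k gradients with the
    largest L2 norm (an assumed salience measure; the paper's exact
    criterion is not given in the abstract).
    """
    norms = np.linalg.norm(grads, axis=1)        # salience score per gradient
    salient = np.argsort(norms)[-num_salient:]   # indices of the top-k gradients
    g = grads[salient]                           # (k, D) salient gradients
    return g.T @ g                               # sum of k outer products, D x D

# Toy usage: 1000 gradient vectors of dimension 50.
rng = np.random.default_rng(0)
grads = rng.normal(size=(1000, 50))
H_fast = rapid_outer_product_hessian(grads, num_salient=64)
H_full = grads.T @ grads                         # full outer-product sum, for comparison
print(H_fast.shape, np.linalg.norm(H_full - H_fast) / np.linalg.norm(H_full))

The cost of forming the sum drops from T to k outer products of size D x D, which is the source of the speed-up the abstract claims; how well H_fast tracks H_full depends on how concentrated the gradient norms are.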