Full metadata record
DC Field: Value (Language)
dc.contributor.author: Chien, Jen-Tzung (en_US)
dc.contributor.author: Chiang, Cheng-Chun (en_US)
dc.date.accessioned: 2014-12-08T15:30:53Z
dc.date.available: 2014-12-08T15:30:53Z
dc.date.issued: 2012 (en_US)
dc.identifier.isbn: 978-1-62276-759-5 (en_US)
dc.identifier.uri: http://hdl.handle.net/11536/22056
dc.description.abstract: This paper presents group sparse hidden Markov models (GS-HMMs), in which a sequence of acoustic features is driven by a Markov chain and each feature vector is represented by two groups of basis vectors. The group of common bases represents the features shared across states within an HMM, while the group of individual bases compensates for the intra-state residual information. Importantly, the sparse prior on the sensing weights is controlled by the Laplacian scale mixture (LSM) distribution, which is obtained by multiplying a Laplacian variable with an inverse Gamma variable. The scale mixture parameter in the LSM makes the distribution even sparser and serves as an automatic relevance determination mechanism for selecting the relevant bases from the two groups. The weights and the two sets of bases in GS-HMMs are estimated via Bayesian learning. We apply this framework to acoustic modeling and show the robustness of GS-HMMs for speech recognition in the presence of different noise types and SNRs. (en_US)
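The abstract's LSM construction (a Laplacian variable scaled by an inverse Gamma variable) can be sketched numerically. This is an illustrative sample-drawing snippet, not code from the paper; the function name `sample_lsm` and the hyperparameters `alpha` and `beta` are assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_lsm(n, b=1.0, alpha=3.0, beta=1.0):
    """Draw n weights from a Laplacian scale mixture:
    w = lam * u, where u ~ Laplace(0, b) and
    lam ~ inverse-Gamma(alpha, beta)."""
    u = rng.laplace(0.0, b, size=n)              # Laplacian variable
    lam = beta / rng.gamma(alpha, 1.0, size=n)   # inverse-Gamma via 1/Gamma
    return lam * u

# Compare a plain Laplacian prior with the scale mixture: the
# mixing over scales gives the LSM a sharper peak at zero and
# heavier tails, i.e. a sparser prior on the sensing weights.
w_laplace = rng.laplace(0.0, 1.0, size=100_000)
w_lsm = sample_lsm(100_000)
```

The inverse Gamma mixing variable is what the abstract refers to as the scale mixture parameter; making it random (rather than a fixed scale) is what sharpens the prior's peak and thickens its tails.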
dc.language.iso: en_US (en_US)
dc.subject: Bayesian learning (en_US)
dc.subject: group sparsity (en_US)
dc.subject: hidden Markov model (en_US)
dc.subject: speech recognition (en_US)
dc.title: Group Sparse Hidden Markov Models for Speech Recognition (en_US)
dc.type: Proceedings Paper (en_US)
dc.identifier.journal: 13TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2012 (INTERSPEECH 2012), VOLS 1-3 (en_US)
dc.citation.spage: 2645 (en_US)
dc.citation.epage: 2648 (en_US)
dc.contributor.department: 電機資訊學士班 (zh_TW)
dc.contributor.department: Undergraduate Honors Program of Electrical Engineering and Computer Science (en_US)
dc.identifier.wosnumber: WOS:000320827201216
Appears in Collections: Conference Paper