Full metadata record
DC Field | Value | Language
dc.contributor.author | Chang, Pao-Chung | en_US
dc.contributor.author | Chen, Sin-Horng | en_US
dc.contributor.author | Juang, Biing-Hwang | en_US
dc.date.accessioned | 2014-12-08T15:04:28Z | -
dc.date.available | 2014-12-08T15:04:28Z | -
dc.date.issued | 1993-07-01 | en_US
dc.identifier.issn | 1063-6676 | en_US
dc.identifier.uri | http://dx.doi.org/10.1109/89.232616 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/2962 | -
dc.description.abstract | In a traditional speech recognition system, the distance score between a test token and a reference pattern is obtained by simply averaging the distortion sequence resulting from matching the two patterns through a dynamic programming procedure. The final decision is made by choosing the reference with the minimal average distance score. If we view the distortion sequence as a form of observed features, a decision rule based on a discriminant function designed specifically for the distortion sequence will clearly perform better than one based on the simple average distortion. We therefore propose a linear discriminant function of the form Δ = Σ_{i=1}^{T} ω_i · d_i to compute the distance score Δ, instead of the direct average Δ = (1/T) Σ_{i=1}^{T} d_i. Several adaptive algorithms are proposed for learning the discriminant weighting function: one heuristic method, two methods based on the error propagation algorithm [1], [2], and one method based on the generalized probabilistic descent (GPD) algorithm [3]. We study these methods on a speaker-independent speech recognition task involving utterances of the highly confusable English E-set (b, c, d, e, g, p, t, v, z). The results show that the best performance is obtained with the GPD method, which achieves 78.1% accuracy, compared to 67.6% for the traditional unweighted average method. Besides the experimental comparisons, an analytical discussion of the various training algorithms is also provided. | en_US
dc.language.iso | en_US | en_US
dc.title | Discriminative Analysis of Distortion Sequences in Speech Recognition | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1109/89.232616 | en_US
dc.identifier.journal | IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING | en_US
dc.citation.volume | 1 | en_US
dc.citation.issue | 3 | en_US
dc.citation.spage | 326 | en_US
dc.citation.epage | 333 | en_US
dc.contributor.department | 電信工程研究所 | zh_TW
dc.contributor.department | Institute of Communications Engineering | en_US
dc.identifier.wosnumber | WOS:000207078600007 | -
dc.citation.woscount | 6 | -
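
As a rough illustration of the idea described in the abstract above, the sketch below (Python, not from the paper) contrasts the traditional unweighted average of a DTW distortion sequence with the proposed weighted score Δ = Σ_{i=1}^{T} ω_i · d_i, and shows one illustrative gradient step on a sigmoid-smoothed misclassification measure, loosely in the spirit of GPD training. The function names, learning rate, and toy distortion values are assumptions for demonstration only.

```python
# Illustrative sketch only: not the authors' implementation.
# d_i are frame-level distortions from DTW alignment of a test token
# against a reference pattern; omega_i are the learned weights.
import numpy as np

def average_score(d):
    """Traditional distance score: unweighted mean, Delta = (1/T) * sum_i d_i."""
    return float(np.mean(d))

def weighted_score(d, omega):
    """Proposed linear discriminant score, Delta = sum_i omega_i * d_i."""
    return float(np.dot(omega, d))

def gpd_style_update(omega, d_correct, d_competitor, lr=0.01):
    """One gradient-descent step on a sigmoid-smoothed misclassification
    measure (loosely GPD-flavored): lower the weighted score of the correct
    class relative to the best competing class."""
    margin = weighted_score(d_correct, omega) - weighted_score(d_competitor, omega)
    s = 1.0 / (1.0 + np.exp(-margin))                   # smoothed 0/1 loss
    grad = s * (1.0 - s) * (d_correct - d_competitor)   # d(loss)/d(omega)
    return omega - lr * grad

# Toy distortion sequences of length T = 5 (values are made up).
d_correct = np.array([0.2, 0.5, 0.9, 0.4, 0.1])        # true-class alignment
d_competitor = np.array([0.3, 0.4, 0.6, 0.5, 0.2])     # best wrong-class alignment
omega = np.full(len(d_correct), 1.0 / len(d_correct))  # start from the plain average
omega = gpd_style_update(omega, d_correct, d_competitor)
print(average_score(d_correct), weighted_score(d_correct, omega))
```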
Appears in Collections: Articles


Files in This Item:

  1. 000207078600007.pdf
