Full metadata record
DC Field | Value | Language
dc.contributor.author | HONG, TP | en_US
dc.contributor.author | TSENG, SS | en_US
dc.date.accessioned | 2014-12-08T15:05:01Z | -
dc.date.available | 2014-12-08T15:05:01Z | -
dc.date.issued | 1992-02-01 | en_US
dc.identifier.issn | 0167-8191 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/3545 | -
dc.description.abstract | A parallel perceptron learning algorithm based upon a single-channel broadcast communication model has been proposed here. Since it can process training instances in parallel, instead of one by one as in the conventional algorithm, a large speedup can be expected. Theoretical analysis shows that with n processors, the average speedup ranges from O(log n) to O(n) under a variety of assumptions (where n is the number of training instances). Experimental results further show that the actual average speedup is approximately O(n^0.91 / log n). Extensions to a bounded number of processors and to backpropagation learning have also been discussed. | en_US
dc.language.iso | en_US | en_US
dc.subject | PERCEPTRON | en_US
dc.subject | SEPARABLE | en_US
dc.subject | PARALLEL LEARNING | en_US
dc.subject | BROADCAST COMMUNICATION MODEL | en_US
dc.subject | BACKPROPAGATION | en_US
dc.title | PARALLEL PERCEPTRON LEARNING ON A SINGLE-CHANNEL BROADCAST COMMUNICATION MODEL | en_US
dc.type | Article | en_US
dc.identifier.journal | PARALLEL COMPUTING | en_US
dc.citation.volume | 18 | en_US
dc.citation.issue | 2 | en_US
dc.citation.spage | 133 | en_US
dc.citation.epage | 148 | en_US
dc.contributor.department | 資訊科學與工程研究所 | zh_TW
dc.contributor.department | Institute of Computer Science and Engineering | en_US
dc.identifier.wosnumber | WOS:A1992HH61600002 | -
dc.citation.woscount | 3 | -
Appears in Collections: Journal Articles
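As a rough illustration of the data-parallel idea described in the abstract above, the sketch below computes perceptron corrections for many training instances at once and combines them, rather than applying them strictly one by one. This is only a minimal batch-style sketch in NumPy; the function name, learning rate, and data are hypothetical, and it is not the paper's single-channel broadcast algorithm or its speedup analysis.

import numpy as np

def parallel_perceptron_epoch(X, y, w, lr=0.1):
    """One epoch of a batch-style perceptron update (illustrative only).

    X : (n, d) training instances
    y : (n,)  labels in {-1, +1}
    w : (d,)  current weight vector
    """
    preds = np.sign(X @ w)                 # all n predictions evaluated together
    wrong = preds != y                     # instances that would trigger an update
    # Sum the per-instance corrections; conceptually, each of n processors could
    # compute its own term locally before the terms are combined.
    correction = (y[wrong, None] * X[wrong]).sum(axis=0)
    return w + lr * correction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    true_w = rng.normal(size=5)            # synthetic linearly separable data
    y = np.sign(X @ true_w)
    w = np.zeros(5)
    for _ in range(50):
        w = parallel_perceptron_epoch(X, y, w)
    print("training accuracy:", np.mean(np.sign(X @ w) == y))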