Full metadata record
DC Field | Value | Language
dc.contributor.author | HONG, TP | en_US
dc.contributor.author | TSENG, SS | en_US
dc.date.accessioned | 2014-12-08T15:04:06Z | -
dc.date.available | 2014-12-08T15:04:06Z | -
dc.date.issued | 1994-03-01 | en_US
dc.identifier.issn | 0167-8191 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/2604 | -
dc.description.abstract | In [2], a parallel perceptron learning algorithm on the single-channel broadcast communication model was proposed to speed up the learning of weights of perceptrons [3]. The results in [2] showed that given n training examples, the average speedup is 1.48 · n^0.91 / log n by n processors. Here, we explain how the parallelization may be modified so that it is applicable to any number of processors. Both analytical and experimental results show that the average speedup can reach nearly O(r) by r processors if r is much less than n. | en_US
dc.language.iso | en_US | en_US
dc.subject | PARALLEL ALGORITHM | en_US
dc.subject | BROADCAST COMMUNICATION MODEL | en_US
dc.subject | NEURAL NETWORK | en_US
dc.subject | PERCEPTRON | en_US
dc.title | AN OPTIMAL PARALLEL PERCEPTRON LEARNING ALGORITHM FOR A LARGE TRAINING SET | en_US
dc.type | Note | en_US
dc.identifier.journal | PARALLEL COMPUTING | en_US
dc.citation.volume | 20 | en_US
dc.citation.issue | 3 | en_US
dc.citation.spage | 347 | en_US
dc.citation.epage | 352 | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:A1994NC86800005 | -
dc.citation.woscount | 3 | -
Appears in Collections: Journal Articles
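As a rough illustration of the speedup expression quoted in the abstract, the sketch below simply evaluates 1.48 · n^0.91 / log n for a few sample training-set sizes. The function name average_speedup and the base-2 logarithm are assumptions made here for illustration; the abstract does not state the base of the logarithm, and this sketch is not part of the metadata record or the paper.

```python
import math

def average_speedup(n: int, log_base: float = 2.0) -> float:
    """Evaluate the average-speedup expression quoted in the abstract,
    1.48 * n**0.91 / log(n), for n training examples with n processors.

    The base of the logarithm is not specified in the abstract;
    base 2 is assumed here purely for illustration.
    """
    return 1.48 * n ** 0.91 / math.log(n, log_base)

if __name__ == "__main__":
    # Print the expression for a few sample training-set sizes.
    for n in (1_000, 10_000, 100_000):
        print(f"n = {n:>7}: average speedup ~ {average_speedup(n):.1f}")
```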