Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | HONG, TP | en_US |
dc.contributor.author | TSENG, SS | en_US |
dc.date.accessioned | 2014-12-08T15:04:06Z | - |
dc.date.available | 2014-12-08T15:04:06Z | - |
dc.date.issued | 1994-03-01 | en_US |
dc.identifier.issn | 0167-8191 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/2604 | - |
dc.description.abstract | In [2], a parallel perceptron learning algorithm on the single-channel broadcast communication model was proposed to speed up the learning of the weights of perceptrons [3]. The results in [2] showed that, given n training examples, the average speedup is 1.48·n^0.91/log n with n processors. Here, we explain how the parallelization may be modified so that it applies to any number of processors. Both analytical and experimental results show that the average speedup can reach nearly O(r) with r processors when r is much smaller than n. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | PARALLEL ALGORITHM | en_US |
dc.subject | BROADCAST COMMUNICATION MODEL | en_US |
dc.subject | NEURAL NETWORK | en_US |
dc.subject | PERCEPTRON | en_US |
dc.title | AN OPTIMAL PARALLEL PERCEPTRON LEARNING ALGORITHM FOR A LARGE TRAINING SET | en_US |
dc.type | Note | en_US |
dc.identifier.journal | PARALLEL COMPUTING | en_US |
dc.citation.volume | 20 | en_US |
dc.citation.issue | 3 | en_US |
dc.citation.spage | 347 | en_US |
dc.citation.epage | 352 | en_US |
dc.contributor.department | 資訊工程學系 | zh_TW |
dc.contributor.department | Department of Computer Science | en_US |
dc.identifier.wosnumber | WOS:A1994NC86800005 | - |
dc.citation.woscount | 3 | - |
Appears in Collections: | Journal Articles |
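
The abstract above reports an average speedup of 1.48·n^0.91/log n with n processors; taking the log as base 2 (an assumption, since the abstract does not state the base), n = 1024 training examples would give roughly 1.48·2^9.1/10 ≈ 81. The sketch below illustrates the general idea in plain Python: the n examples are statically partitioned among r simulated processors, each shard scans for a misclassified example, and one hit is "broadcast" so every copy of the weight vector updates identically. This is a minimal sketch under those assumptions, not the algorithm of [2]; `parallel_perceptron`, the sharding, and the simulated broadcast round are all hypothetical names and simplifications.

```python
import numpy as np

def parallel_perceptron(X, y, r, max_rounds=10_000):
    """Perceptron learning with the misclassification search split
    across r shards (hypothetical sketch; runs in sequence the work
    that the broadcast model would run on r processors)."""
    n, d = X.shape
    w = np.zeros(d)
    shards = np.array_split(np.arange(n), r)  # static partition of the n examples
    for _ in range(max_rounds):
        hit = None
        for shard in shards:                  # each scan is one processor's work
            mis = shard[(X[shard] @ w) * y[shard] <= 0]
            if mis.size:
                hit = mis[0]                  # first misclassified example found
                break
        if hit is None:                       # no shard found an error: converged
            return w
        w += y[hit] * X[hit]                  # 'broadcast' update: every copy of w
    return w                                  # changes identically

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.sign(X @ np.array([1.0, -2.0]))    # linearly separable labels
    w = parallel_perceptron(X, y, r=8)
    print("training accuracy:", np.mean(np.sign(X @ w) == y))
```

In a real implementation each shard scan would run on its own processor, so the per-round checking cost drops from O(n) to O(n/r); that reduction is the source of the nearly O(r) speedup the abstract reports for r much smaller than n.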