Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | HONG, TP | en_US |
dc.contributor.author | TSENG, SS | en_US |
dc.date.accessioned | 2014-12-08T15:05:01Z | - |
dc.date.available | 2014-12-08T15:05:01Z | - |
dc.date.issued | 1992-02-01 | en_US |
dc.identifier.issn | 0167-8191 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/3545 | - |
dc.description.abstract | A parallel perceptron learning algorithm based upon a single-channel broadcast communication model has been proposed here. Since it can process training instances in parallel, instead of one by one as in the conventional algorithm, a large speedup can be expected. Theoretical analysis shows that with n processors, the average speedup ranges from O(log n) to O(n) under a variety of assumptions (where n is the number of training instances). Experimental results further show that the actual average speedup is approximately O(n^0.91/log n). Extensions to a bounded number of processors and to backpropagation learning have also been discussed. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | PERCEPTRON | en_US |
dc.subject | SEPARABLE | en_US |
dc.subject | PARALLEL LEARNING | en_US |
dc.subject | BROADCAST COMMUNICATION MODEL | en_US |
dc.subject | BACKPROPAGATION | en_US |
dc.title | PARALLEL PERCEPTRON LEARNING ON A SINGLE-CHANNEL BROADCAST COMMUNICATION MODEL | en_US |
dc.type | Article | en_US |
dc.identifier.journal | PARALLEL COMPUTING | en_US |
dc.citation.volume | 18 | en_US |
dc.citation.issue | 2 | en_US |
dc.citation.spage | 133 | en_US |
dc.citation.epage | 148 | en_US |
dc.contributor.department | 資訊科學與工程研究所 | zh_TW |
dc.contributor.department | Institute of Computer Science and Engineering | en_US |
dc.identifier.wosnumber | WOS:A1992HH61600002 | - |
dc.citation.woscount | 3 | - |
Appears in Collections: | Articles |
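
The abstract above describes evaluating all training instances in parallel and then applying a single correction coordinated over a shared broadcast channel. The sketch below is only an illustration of that general idea, not the algorithm from the paper: the vectorized check over all instances stands in for the n processors, the selection of one misclassified instance stands in for the single-channel broadcast, and all names (`parallel_perceptron`, `lr`, `max_epochs`) are assumptions introduced here for illustration.

```python
# Illustrative sketch only: a perceptron in which every training instance is
# tested against the current weights in one vectorized step (a stand-in for
# the n processors of the broadcast model), after which a single misclassified
# instance is "broadcast" and its correction applied. Names are hypothetical.
import numpy as np

def parallel_perceptron(X, y, lr=1.0, max_epochs=100):
    """Train a perceptron; X is (m, d), y has labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        # "Parallel" phase: all instances check the current weights at once.
        margins = y * (X @ w + b)
        wrong = np.flatnonzero(margins <= 0)
        if wrong.size == 0:          # all instances correctly classified
            return w, b
        # "Broadcast" phase: one misclassified instance wins the channel
        # and its correction is applied to the shared weights.
        i = wrong[0]
        w += lr * y[i] * X[i]
        b += lr * y[i]
    return w, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)   # linearly separable labels
    w, b = parallel_perceptron(X, y)
    print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
```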