Full metadata record
DC Field | Value | Language
dc.contributor.author | ZHANG, CN | en_US
dc.contributor.author | WANG, M | en_US
dc.contributor.author | TSENG, CC | en_US
dc.date.accessioned | 2014-12-08T15:03:37Z | -
dc.date.available | 2014-12-08T15:03:37Z | -
dc.date.issued | 1995 | en_US
dc.identifier.issn | 0941-0643 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/2157 | -
dc.identifier.uri | http://dx.doi.org/10.1007/BF01414076 | en_US
dc.description.abstract | In this work we propose two techniques for improving VLSI implementations of artificial neural networks (ANNs). By using two kinds of processing elements (PEs), one dedicated to the basic operations (addition and multiplication) and the other to evaluating the activation function, the total time and cost of the VLSI array implementation of ANNs can be reduced by a factor of two compared with previous work. By taking advantage of the residue number system (RNS), the efficiency of each PE can be increased further. Two RNS-based array processor designs are proposed: the first is built from look-up tables, and the second is constructed from binary adders together with mixed-radix conversion (MRC), so that the hardware is simple and high-speed operation is obtained. The proposed techniques are general enough to be extended to other forms of loading and learning algorithms. | en_US
dc.language.iso | en_US | en_US
dc.subject | MIXED-RADIX CONVERSION | en_US
dc.subject | NEURAL NETWORK | en_US
dc.subject | PARALLEL PROCESSING | en_US
dc.subject | RESIDUE NUMBER SYSTEM | en_US
dc.subject | SYSTOLIC ARRAY | en_US
dc.title | RESIDUE SYSTOLIC IMPLEMENTATIONS FOR NEURAL NETWORKS | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1007/BF01414076 | en_US
dc.identifier.journal | NEURAL COMPUTING & APPLICATIONS | en_US
dc.citation.volume | 3 | en_US
dc.citation.issue | 3 | en_US
dc.citation.spage | 149 | en_US
dc.citation.epage | 156 | en_US
dc.contributor.department | 資訊工程學系 (Department of Computer Science) | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:A1995RU60100004 | -
dc.citation.woscount | 1 | -
Appears in Collections: Articles
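
For readers unfamiliar with the arithmetic the abstract refers to, the following is a minimal Python sketch of residue number system (RNS) arithmetic and mixed-radix conversion (MRC). The moduli are illustrative choices, not the paper's, and the code models only the number-theoretic idea behind the design, not the VLSI array itself: additions and multiplications (the multiply-accumulate work of the arithmetic PE) proceed carry-free in independent residue channels, and MRC recovers the final integer using only small modular operations.

    # A minimal sketch of RNS arithmetic and mixed-radix conversion (MRC).
    # The moduli are illustrative, not taken from the paper.

    MODULI = (7, 11, 13)  # pairwise coprime; dynamic range 7*11*13 = 1001

    def to_rns(x):
        """Encode an integer as its residue in each channel."""
        return tuple(x % m for m in MODULI)

    def rns_add(a, b):
        """Carry-free addition: every channel operates independently."""
        return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

    def rns_mul(a, b):
        """Carry-free multiplication, again channel by channel."""
        return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

    def from_rns(residues):
        """Mixed-radix conversion: recover the integer with only small
        modular subtractions and inverses (needs Python 3.8+ for
        pow(m, -1, n))."""
        rs = list(residues)
        x, weight = 0, 1
        for i, m in enumerate(MODULI):
            d = rs[i]  # next mixed-radix digit
            x += d * weight
            weight *= m
            # peel d off the remaining channels and divide by m there
            for j in range(i + 1, len(MODULI)):
                rs[j] = (rs[j] - d) * pow(m, -1, MODULI[j]) % MODULI[j]
        return x

    # One multiply-accumulate step, the basic operation of the arithmetic PE:
    a, b, c = 5, 9, 20
    acc = rns_add(rns_mul(to_rns(a), to_rns(b)), to_rns(c))
    assert from_rns(acc) == a * b + c == 65

Consistent with the abstract, a look-up-table design would realize the per-channel modular operations as small ROMs indexed by residues, while the adder-based design realizes them with binary adders plus MRC as sketched above; the per-channel words stay small, which is the source of the claimed PE efficiency.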