Full metadata record
DC Field | Value | Language
dc.contributor.author | Chang, Chih-Cheng | en_US
dc.contributor.author | Liu, Jen-Chieh | en_US
dc.contributor.author | Shen, Yu-Lin | en_US
dc.contributor.author | Chou, Teyuh | en_US
dc.contributor.author | Chen, Pin-Chun | en_US
dc.contributor.author | Wang, I-Ting | en_US
dc.contributor.author | Su, Chih-Chun | en_US
dc.contributor.author | Wu, Ming-Hong | en_US
dc.contributor.author | Hudec, Boris | en_US
dc.contributor.author | Chang, Che-Chia | en_US
dc.contributor.author | Tsai, Chia-Ming | en_US
dc.contributor.author | Chang, Tian-Sheuan | en_US
dc.contributor.author | Wong, H.-S. Philip | en_US
dc.contributor.author | Hou, Tuo-Hung | en_US
dc.date.accessioned | 2018-08-21T05:56:59Z | -
dc.date.available | 2018-08-21T05:56:59Z | -
dc.date.issued | 2017-01-01 | en_US
dc.identifier.issn | 2380-9248 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/146907 | -
dc.description.abstract | This paper highlights feasible routes to using resistive memory (RRAM) for accelerating online training of deep neural networks (DNNs). A high degree of asymmetric nonlinearity in analog RRAMs could be tolerated when weight-update algorithms are optimized to reduce training noise. Hybrid-weight Net (HW-Net), a modified multilayer perceptron (MLP) algorithm that utilizes hybrid internal analog and external binary weights, is also proposed. Highly accurate online training could be realized using simple binary RRAMs that have already been widely developed as digital memory. | en_US
dc.language.iso | en_US | en_US
dc.title | Challenges and Opportunities toward Online Training Acceleration using RRAM-based Hardware Neural Network | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2017 IEEE INTERNATIONAL ELECTRON DEVICES MEETING (IEDM) | en_US
dc.contributor.department | 電子工程學系及電子研究所 | zh_TW
dc.contributor.department | Department of Electronics Engineering and Institute of Electronics | en_US
dc.identifier.wosnumber | WOS:000424868900067 | en_US
Appears in Collections: Conferences Paper
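
The hybrid-weight scheme named in the abstract — high-precision internal analog weights that accumulate gradient updates, paired with binarized external weights that carry the forward and backward passes (standing in for two-state binary RRAM conductances) — follows the same general pattern as BinaryConnect-style training. Below is a minimal NumPy sketch of that general pattern, not the authors' HW-Net: the network size, toy dataset, learning rate, and all names are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy separable dataset (illustrative; not a benchmark from the paper).
    X = rng.normal(0.0, 1.0, (200, 2))
    Y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

    # Internal analog (high-precision) weights: these accumulate the updates.
    W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
    W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

    def binarize(w):
        # External binary weights: two conductance states, mapped to +/-1.
        return np.where(w >= 0.0, 1.0, -1.0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.1
    for step in range(2001):
        # Forward and backward passes use the binarized (external) weights...
        B1, B2 = binarize(W1), binarize(W2)
        h = sigmoid(X @ B1 + b1)
        y = sigmoid(h @ B2 + b2)

        dy = (y - Y) * y * (1.0 - y) / len(X)   # MSE gradient at the output
        dh = (dy @ B2.T) * h * (1.0 - h)        # backprop through binary weights

        # ...but the updates land on the internal analog weights
        # (straight-through estimator), clipped to stay in range, as in
        # BinaryConnect-style schemes.
        W2 = np.clip(W2 - lr * (h.T @ dy), -1.0, 1.0)
        W1 = np.clip(W1 - lr * (X.T @ dh), -1.0, 1.0)
        b2 -= lr * dy.sum(axis=0)
        b1 -= lr * dh.sum(axis=0)

        if step % 500 == 0:
            print(f"step {step:4d}  mse {np.mean((y - Y) ** 2):.4f}")

The point mirrored from the abstract is that the memory cells only ever need two states; the precision required for gradient accumulation lives in the internal analog weights (software floats in this sketch).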