Title: Fully parallel write/read in resistive synaptic array for accelerating on-chip learning
Authors: Gao, Ligang
Wang, I-Ting
Chen, Pai-Yu
Vrudhula, Sarma
Seo, Jae-Sun
Cao, Yu
Hou, Tuo-Hung
Yu, Shimeng
Department: Department of Electronics Engineering and Institute of Electronics
Keywords: resistive switching; neuro-inspired computing; cross-point array; synaptic device; online learning; weight update; weighted sum
Issue Date: 13-Nov-2015
Abstract: A neuro-inspired computing paradigm beyond the von Neumann architecture is emerging; it generally takes advantage of massive parallelism and is aimed at complex tasks that involve intelligence and learning. The cross-point array architecture with synaptic devices has been proposed for on-chip implementation of the weighted sum and weight update in the learning algorithms. In this work, forming-free, silicon-process-compatible Ta/TaOx/TiO2/Ti synaptic devices are fabricated, in which >200 levels of conductance states can be continuously tuned by identical programming pulses. To demonstrate the advantage of the parallelism of the cross-point array architecture, a novel fully parallel write scheme is designed and experimentally demonstrated in a small-scale crossbar array to accelerate the weight update in the training process, at a speed that is independent of the array size. Compared to the conventional row-by-row write scheme, it achieves >30x speed-up and >30x improvement in energy efficiency, as projected for a large-scale array. When realistic synaptic device characteristics such as device variations are taken into account in an array-level simulation, the proposed array architecture achieves ~95% recognition accuracy on MNIST handwritten digits, which is close to the accuracy achieved by software using the ideal sparse coding algorithm.
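
As a rough illustration of the two crossbar operations the abstract refers to, the Python sketch below contrasts a conventional row-by-row weight update with a fully parallel outer-product update, alongside the weighted-sum read. All numerical values (array size, conductance range, read voltages, learning rate) are assumptions chosen for illustration and are not taken from the paper.

# Minimal sketch of crossbar read (weighted sum) and write (weight update).
# All parameters below are illustrative assumptions, not values from the paper.
import numpy as np

rows, cols = 8, 8                                # hypothetical small-scale array
rng = np.random.default_rng(0)

# Conductance matrix G models the analog synaptic weights stored in the array.
G = rng.uniform(1e-6, 1e-4, size=(rows, cols))   # siemens, illustrative range

# Weighted sum (read): applying a voltage vector to the rows and summing the
# column currents performs a matrix-vector product in one parallel step.
V = rng.uniform(0.0, 0.2, size=rows)             # read voltages, illustrative
I = G.T @ V                                      # column currents = weighted sum

# Weight update (write): a rank-1 (outer-product) update derived from the
# pre- and post-synaptic activity, as used in many on-chip learning rules.
pre = rng.uniform(0, 1, size=rows)
post = rng.uniform(0, 1, size=cols)
eta = 1e-6                                       # learning rate, illustrative

# Conventional row-by-row write: one row is selected and updated per step,
# so the number of write steps grows with the number of rows.
G_rowwise = G.copy()
for r in range(rows):
    G_rowwise[r, :] += eta * pre[r] * post

# Fully parallel write: the whole outer-product update is applied to the
# array in a single step, so write time is independent of the array size.
G_parallel = G + eta * np.outer(pre, post)

assert np.allclose(G_rowwise, G_parallel)        # same result, fewer steps

The point of the contrast is the one made in the abstract: both schemes reach the same updated weights, but the row-by-row scheme needs a number of write steps proportional to the array dimension, while the parallel scheme needs one.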
URI: http://dx.doi.org/10.1088/0957-4484/26/45/455204
http://hdl.handle.net/11536/128372
ISSN: 0957-4484
DOI: 10.1088/0957-4484/26/45/455204
Journal: NANOTECHNOLOGY
Volume: 26
Issue: 45
Appears in Collections: Journal Articles