Title: Implementation of Hardware Neural Network System Based on Analog Resistive Artificial Synapse
Author: Chou, Teyuh (周德玉)
Hou, Tuo-Hung (侯拓宏)
Institute of Electronics
Keywords: Hardware Neural Network System; Resistive-switching Random Access Memory; Artificial Synapse
Issue Date: 2016
Abstract: The demand for ever-greater computing capacity cannot be fully satisfied, even as Si technology approaches its scaling limit. The current von Neumann architecture, comprising a CPU and memory, is powerful enough for many complex computational tasks, but it suffers from limited computing efficiency and excessive power consumption. Neuromorphic computing has been proposed to overcome the von Neumann bottleneck and is viewed as a promising candidate for the next-generation computing paradigm.

Important synaptic characteristics similar to those of the biological neural system have been demonstrated in Resistive-switching Random Access Memory (RRAM). RRAM-based hardware neural networks are actively researched because they can perform real-time learning tasks; however, fully parallel hardware implementations are still lacking. In this thesis, a winner-take-all model incorporating RRAM synaptic characteristics is constructed, and non-ideal device effects are simulated to evaluate the robustness of the RRAM-based neural network. The simulated model is then transferred to a real system implementation using analog resistive artificial synapses. The system includes leaky integrate-and-fire neurons, a Spike-Timing-Dependent Plasticity (STDP) weight-update scheme, and control signals generated by an FPGA to execute the learning algorithm.

Beyond the successful RRAM-based single-layer neural network implementation, multilayer neural networks capable of handling more complicated tasks are of great interest. A supervised back-propagation learning algorithm, which uses gradient descent to update the synaptic weights of the analog resistive artificial synapses, is established for both single-layer and multilayer hardware neural network systems.

The unsupervised winner-take-all and supervised back-propagation hardware neural networks based on RRAM synapses presented in this thesis illustrate the advantages and potential of RRAM-based hardware neural networks. We believe this study will benefit further research on and applications of neuromorphic computing.
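To make the unsupervised scheme concrete, the following is a minimal sketch of a winner-take-all layer built from leaky integrate-and-fire neurons with an STDP-like bounded conductance update. All parameter values (layer sizes, leak time constant, threshold, learning rates, conductance window) are illustrative assumptions for demonstration only; they are not the circuit or device parameters used in the thesis hardware.

```python
import numpy as np

# Illustrative winner-take-all (WTA) layer with leaky integrate-and-fire (LIF) neurons
# and an STDP-like update on bounded "conductances". Parameters are assumptions, not
# the thesis hardware values.

rng = np.random.default_rng(0)

N_IN, N_OUT = 64, 4          # number of input lines and WTA output neurons
G_MIN, G_MAX = 0.1, 1.0      # bounded analog conductance range of a synapse
TAU, V_TH = 20.0, 5.0        # LIF leak time constant and firing threshold
ETA_P, ETA_D = 0.05, 0.02    # potentiation / depression rates

g = rng.uniform(G_MIN, G_MAX, size=(N_OUT, N_IN))  # synaptic conductance matrix
v = np.zeros(N_OUT)                                 # membrane potentials

def step(spikes_in, dt=1.0):
    """One time step: integrate input current, pick a winner, apply STDP-like update."""
    global v
    i_syn = g @ spikes_in                 # weighted input current through the synapse array
    v = v * np.exp(-dt / TAU) + i_syn     # leaky integration
    winner = int(np.argmax(v))
    if v[winner] >= V_TH:
        # Winner-take-all: only the firing neuron updates its synapses.
        # Potentiate synapses that received a spike, depress the others, staying in bounds.
        g[winner] += np.where(spikes_in > 0,
                              ETA_P * (G_MAX - g[winner]),
                              -ETA_D * (g[winner] - G_MIN))
        v[:] = 0.0                        # reset all membrane potentials after a winner fires
        return winner
    return None

# Usage: present random binary spike patterns to the layer
for _ in range(100):
    step((rng.random(N_IN) < 0.2).astype(float))
```

In a physical RRAM array, the potentiation and depression terms above would correspond to SET/RESET programming pulses issued by the FPGA-generated control signals rather than a direct numerical assignment.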
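For the supervised case, a small sketch of back-propagation with gradient descent follows, where each weight is clipped to a bounded range to mimic the limited window of an analog resistive synapse. The network size, learning rate, sigmoid activation, and toy data are hypothetical, chosen only to show the update rule; the thesis maps signed weights onto device conductances in its own way.

```python
import numpy as np

# Illustrative back-propagation with gradient descent on bounded weights.
# Sizes, learning rate, and data are assumptions, not the thesis configuration.

rng = np.random.default_rng(1)
W_MIN, W_MAX, LR = -1.0, 1.0, 0.1   # achievable weight window and learning rate

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two-layer (one hidden layer) network: 4 inputs -> 8 hidden -> 2 outputs
w1 = rng.uniform(W_MIN, W_MAX, (4, 8))
w2 = rng.uniform(W_MIN, W_MAX, (8, 2))

def train_step(x, target):
    """Forward pass, back-propagate the error, apply a clipped gradient-descent update."""
    h = sigmoid(x @ w1)                       # hidden activations
    y = sigmoid(h @ w2)                       # network output
    # Gradients of the mean-squared error through the sigmoid nonlinearity
    delta_out = (y - target) * y * (1 - y)
    delta_hid = (delta_out @ w2.T) * h * (1 - h)
    # Gradient-descent updates, clipped to the bounded weight window
    w2[:] = np.clip(w2 - LR * np.outer(h, delta_out), W_MIN, W_MAX)
    w1[:] = np.clip(w1 - LR * np.outer(x, delta_hid), W_MIN, W_MAX)
    return float(np.mean((y - target) ** 2))

# Usage: fit a toy input/target pair
x, t = np.array([1.0, 0.0, 1.0, 0.0]), np.array([1.0, 0.0])
for _ in range(200):
    loss = train_step(x, t)
print(f"final MSE: {loss:.4f}")
```

The clipping step stands in for the fact that an analog resistive synapse can only be programmed within a finite conductance range; the same update rule applies to the single-layer case by dropping the hidden layer.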
URI: http://etd.lib.nctu.edu.tw/cdrfb3/record/nctu/#GT070251808
http://hdl.handle.net/11536/140349
Appears in Collections: Thesis