Title: Hardware Implementation and Design Consideration of Two-layer Perceptron using RRAM Synapse (以電阻式記憶體突觸實現雙層感知器硬體與設計考量)
作者: 沈于琳
侯拓宏
Shen, Yu-Lin
Hou, Tuo-Hung
Department: Institute of Electronics
Keywords: multi-layer back-propagation neural network; analog resistive random-access memory (RRAM)
Issue Date: 2017
Abstract: This thesis discusses the implementation of an online-trained multi-layer back-propagation hardware neural network and its application to image recognition. Software-based neural-network applications such as image and speech recognition have gradually matured, but as the amount of data grows substantially, the high energy consumption of software computation may become a major obstacle to further development; numerous research groups have therefore turned to energy-efficient hardware neural networks. For the synaptic device, the most critical element of a hardware neural network, analog resistive random-access memory (RRAM) is the best candidate: its simple structure allows integration into high-density arrays, and its adjustable resistance simplifies the design of the learning algorithm. The hardware architecture implemented in this thesis uses leaky integrate-and-fire (LIF) circuits as neurons and RRAM devices as synapses that represent the connection strength between neurons; the RRAM conductances are updated by an FPGA according to the back-propagation algorithm with the gradient-descent method. This study combines cross-domain capabilities, including device characterization and modeling, printed-circuit-board design, algorithm simulation, and system integration testing, to validate a complete design flow from individual devices to a full system. By tuning the parameters of the back-propagation algorithm together with a model of the existing RRAM devices, we effectively reduce the impact of the non-ideal characteristics of analog RRAM and of circuit errors on the final image classification results. Finally, we implemented the two-layer back-propagation neural network in hardware and demonstrated correct multi-class classification results. Notably, because of its complexity, this is the first successful demonstration of two-layer back-propagation neural-network hardware, and we believe it will serve as an important reference for future research on and applications of deeper neuromorphic computing.
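The abstract summarizes the algorithmic core of the system: a two-layer perceptron whose RRAM-synapse conductances are updated by an FPGA according to back-propagation with gradient descent. As a rough, purely software illustration of that learning rule (not the thesis's hardware design, its FPGA update circuitry, or its data set), the hypothetical NumPy sketch below trains a two-layer perceptron on a toy task while restricting the forward-pass weights to a small set of discrete levels, loosely mimicking the finite conductance states of an analog RRAM cell; every name, size, and the quantization scheme here are assumptions for illustration only.

```python
# Purely illustrative sketch (hypothetical, not code from the thesis):
# a two-layer perceptron trained by back-propagation with gradient descent.
# The forward pass uses weights snapped to discrete levels, loosely
# mimicking the finite conductance states of analog RRAM synapses, while
# full-precision accumulators absorb gradient steps smaller than one level.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def quantize(w, levels=33, w_max=1.0):
    """Clip to [-w_max, w_max] and snap to `levels` evenly spaced values."""
    step = 2.0 * w_max / (levels - 1)
    return np.clip(np.round(w / step) * step, -w_max, w_max)

# Toy 4-class task (assumed data, for demonstration only): the label is the
# index of the largest of the first four input features.
X = rng.uniform(-1.0, 1.0, size=(200, 8))
labels = np.argmax(X[:, :4], axis=1)
Y = np.eye(4)[labels]                          # one-hot targets

n_in, n_hid, n_out = 8, 16, 4
W1 = rng.uniform(-0.5, 0.5, (n_in, n_hid))
W2 = rng.uniform(-0.5, 0.5, (n_hid, n_out))
lr = 1.0

for epoch in range(2000):
    W1_q, W2_q = quantize(W1), quantize(W2)    # "device" weights
    h = sigmoid(X @ W1_q)                      # hidden layer
    y = sigmoid(h @ W2_q)                      # output layer

    # Back-propagate the mean-squared error through both layers.
    err_out = (y - Y) * y * (1.0 - y)
    err_hid = (err_out @ W2_q.T) * h * (1.0 - h)
    W2 -= lr * h.T @ err_out / len(X)
    W1 -= lr * X.T @ err_hid / len(X)

# Evaluate with the quantized ("deployed") weights.
h = sigmoid(X @ quantize(W1))
y = sigmoid(h @ quantize(W2))
print("training accuracy:", np.mean(np.argmax(y, axis=1) == labels))
```

In the actual system described in the abstract, the sigmoid activations of this sketch are replaced by LIF neuron circuits and the weights are physical RRAM conductances, so the sketch conveys only the gradient-descent update logic handled by the FPGA, not the analog behavior of the hardware.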
URI: http://etd.lib.nctu.edu.tw/cdrfb3/record/nctu/#GT070480118
http://hdl.handle.net/11536/142588
Appears in Collections: Thesis