Title: | Implementation of FPGA-Based Back-Propagation Artificial Neural Network
Authors: | 劉澤翰 Liu, Tse-Han; 陳永平 Chen, Yon-Ping; Institute of Electrical and Control Engineering
Keywords: | Artificial Neural Network; Hardware
Issue Date: | 2010
Abstract: | This thesis presents a hardware architecture for an artificial neural network with learning capability, implemented on the Altera DE2-70 FPGA board operating at 50 MHz. The architecture provides two operation modes, application and learning, which perform the forward computation of the neural network and the training of its weights, respectively. In the application mode, layer multiplexing is adopted, reusing a single neural layer to realize multilayer computation; in addition, the log-sigmoid activation function is approximated by a piecewise-linear (PWL) method to reduce resource usage and speed up the hardware. In the learning mode, the back-propagation algorithm is executed off-line to train the network weights, and an incremental (per-sample) weight-update scheme is used to lower the algorithmic complexity, simplify the design, and further reduce resource usage. All computations use a 24-bit fixed-point format, and the network structure, such as the number of neurons or hidden layers, can be reconfigured through parameters. Finally, the hardware architecture is applied to M-G curve prediction and image edge detection, with successful experimental results. (An illustrative fixed-point sketch of the PWL activation and the incremental weight update follows this record.)
URI: | http://140.113.39.130/cdrfb3/record/nctu/#GT079812610 http://hdl.handle.net/11536/46964 |
Appears in Collections: | Thesis |
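
The short C sketch below illustrates, in software, two of the techniques summarized in the abstract: a piecewise-linear approximation of the log-sigmoid activation computed in fixed point, and an incremental (per-sample) back-propagation-style weight update for a single neuron. It is not the thesis's RTL design; the Q4.20 interpretation of the 24-bit word, the PLAN-style segment breakpoints and slopes, and the single-neuron update rule are illustrative assumptions made here, not details taken from the thesis.

```c
/*
 * Illustrative sketch only -- not the thesis's hardware implementation.
 * Assumptions: the 24-bit fixed-point word is split as Q4.20, the PWL
 * segments follow the well-known PLAN approximation of the sigmoid, and
 * the incremental update is shown for a single output neuron.
 */
#include <stdint.h>
#include <stdio.h>

#define FRAC_BITS 20                      /* assumed Q4.20 split of the 24-bit word */
#define ONE       (1 << FRAC_BITS)        /* fixed-point representation of 1.0       */

typedef int32_t fx_t;                     /* 24-bit values held in a 32-bit container */

static fx_t fx_mul(fx_t a, fx_t b)        /* fixed-point multiply with rescaling */
{
    return (fx_t)(((int64_t)a * b) >> FRAC_BITS);
}

/* PWL log-sigmoid (PLAN-style): linear segments on [0,1), [1,2.375), [2.375,5),
 * saturation at 1.0 for |x| >= 5, using the symmetry sigma(-x) = 1 - sigma(x).
 * Slopes are powers of two, so only shifts and adds are needed. */
static fx_t pwl_logsig(fx_t x)
{
    fx_t ax = (x < 0) ? -x : x;
    fx_t y;
    if (ax >= 5 * ONE)
        y = ONE;                                          /* saturation region  */
    else if (ax >= (fx_t)(2.375 * ONE))
        y = (ax >> 5) + (fx_t)(0.84375 * ONE);            /* slope 1/32 segment */
    else if (ax >= ONE)
        y = (ax >> 3) + (fx_t)(0.625 * ONE);              /* slope 1/8 segment  */
    else
        y = (ax >> 2) + (fx_t)(0.5 * ONE);                /* slope 1/4 segment  */
    return (x < 0) ? (ONE - y) : y;
}

/* Incremental (per-sample) update for one output neuron:
 * w_i <- w_i + eta * (d - y) * y * (1 - y) * x_i, all in fixed point. */
static void incremental_update(fx_t w[], const fx_t x[], int n,
                               fx_t target, fx_t eta)
{
    fx_t net = 0;
    for (int i = 0; i < n; i++)
        net += fx_mul(w[i], x[i]);                        /* weighted sum        */
    fx_t y     = pwl_logsig(net);                         /* PWL activation      */
    fx_t err   = target - y;
    fx_t delta = fx_mul(fx_mul(err, y), ONE - y);         /* err * y * (1 - y)   */
    for (int i = 0; i < n; i++)
        w[i] += fx_mul(fx_mul(eta, delta), x[i]);         /* per-sample update   */
}

int main(void)
{
    fx_t w[2] = { ONE / 10, -ONE / 10 };                  /* small initial weights */
    fx_t x[2] = { ONE, ONE / 2 };                         /* one input sample      */
    incremental_update(w, x, 2, ONE, ONE / 4);            /* target 1.0, eta 0.25  */
    printf("w0=%f w1=%f\n", w[0] / (double)ONE, w[1] / (double)ONE);
    return 0;
}
```

As a design note, the PLAN-style segments above need only shifts and adds, which is the main reason PWL approximations of the log-sigmoid are attractive for FPGA implementation; the segment count and breakpoints actually used in the thesis may differ from this sketch.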