Title: Revised Dynamic Optimal Training of Three-Layer Neural Network
Authors: 林書帆 (Shu-Fan Lin); 王啟旭
Department: Institute of Electrical and Control Engineering
Keywords: Dynamic optimal training; Back propagation; Three-layer neural network; Dynamic learning rate
Date of Issue: 2006
Abstract: This thesis proposes a revised dynamic optimal training algorithm for a three-layer neural network with a sigmoid activation function in the hidden layer and a linear activation function in the output layer. Such a three-layer network can be applied to classification problems, for example classifying the Iris data set. A rigorous proof is presented for the revised dynamic optimal training process of this network, guaranteeing that training converges in a minimum number of epochs. At each iteration, the revised method searches for the optimal learning rate, together with an upper bound on the stable learning rate, to be used in the next iteration, thereby guaranteeing optimal convergence of the training result. By adjusting the initial weighting matrices and modifying the activation functions, the revised dynamic optimal training algorithm is faster and more stable than the original dynamic optimal training algorithm. Excellent improvements in computing time and robustness are obtained for the XOR and Iris data sets.
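As a concrete illustration of the training procedure described in the abstract, the following is a minimal Python sketch, not the thesis's exact algorithm: a three-layer network with a sigmoid hidden layer and a linear output layer, trained by back propagation, where each epoch selects its learning rate by a one-dimensional search over candidates kept below a stability upper bound. The bound used here is a simple heuristic (the thesis derives its bound analytically), and the candidate grid, function names, and network sizes are all illustrative assumptions. The usage lines at the end apply the sketch to the XOR problem mentioned in the abstract.

```python
# Minimal sketch of dynamic optimal training with a per-epoch learning-rate search.
# The stability bound and candidate grid below are illustrative assumptions,
# not the analytical bound derived in the thesis.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(W1, W2, X):
    H = sigmoid(X @ W1)          # hidden layer: sigmoid activation
    Y = H @ W2                   # output layer: linear activation
    return H, Y

def mse(Y, T):
    return 0.5 * np.mean((Y - T) ** 2)

def gradients(W1, W2, X, T):
    H, Y = forward(W1, W2, X)
    E = (Y - T) / X.shape[0]             # output error, scaled by batch size
    gW2 = H.T @ E                        # gradient w.r.t. output weights
    dH = (E @ W2.T) * H * (1.0 - H)      # back-propagate through the sigmoid
    gW1 = X.T @ dH                       # gradient w.r.t. hidden weights
    return gW1, gW2

def train(X, T, n_hidden=4, epochs=2000, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    W2 = rng.normal(scale=0.5, size=(n_hidden, T.shape[1]))
    for _ in range(epochs):
        gW1, gW2 = gradients(W1, W2, X, T)
        # Heuristic stability upper bound on the learning rate (assumption;
        # the thesis obtains this bound analytically at every iteration).
        eta_max = 1.0 / (np.linalg.norm(gW1) + np.linalg.norm(gW2) + 1e-8)
        # "Dynamic optimal" step: try candidate rates below the bound and keep
        # the one that gives the lowest training error for this epoch.
        candidates = np.linspace(0.01, 1.0, 20) * eta_max
        best_eta = min(candidates,
                       key=lambda eta: mse(forward(W1 - eta * gW1,
                                                   W2 - eta * gW2, X)[1], T))
        W1 -= best_eta * gW1
        W2 -= best_eta * gW2
    return W1, W2

# Usage: the XOR problem mentioned in the abstract (a bias column is appended).
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = train(X, T)
print(np.round(forward(W1, W2, X)[1], 2))   # network outputs for the four patterns
```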
URI: http://140.113.39.130/cdrfb3/record/nctu/#GT009312559
http://hdl.handle.net/11536/78244
Appears in Collections: Thesis


Files in This Item:

  1. 255901.pdf

If the file is a zip archive, download and extract it, then open index.html in the extracted folder with a browser to view the full text.