Full metadata record
DC Field | Value | Language
dc.contributor.author | 紀右益 | en_US
dc.contributor.author | 王啟旭 | en_US
dc.date.accessioned | 2014-12-12T02:28:04Z | -
dc.date.available | 2014-12-12T02:28:04Z | -
dc.date.issued | 2004 | en_US
dc.identifier.uri | http://140.113.39.130/cdrfb3/record/nctu/#GT009212573 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/68712 | -
dc.description.abstract | 本篇論文是針對三層類神經網路提出一個動態最佳訓練法則,其中網路的隱藏層和輸出層都有經過一個S型激發函數。這種三層的網路可以被運用於處理分類的問題,像是蝴蝶花的品種分類。我們將對這種三層神經網路的動態最佳訓練方法提出一個完整的証明,用來說明這種動態最佳訓練方法保證神經網路能在最短的迭代次數下達到收斂的輸出結果。這種最佳的動態訓練方法不是使用單一固定的學習速率,而是在每一次的迭代過程中不斷的更新,來取得下一次迭代過程所需要的最佳學習速率,以保證最佳的收斂的訓練結果。我們可以由XOR和蝴蝶花的測試例子得到良好的結論。 | zh_TW
dc.description.abstract | This thesis proposes a dynamical optimal training algorithm for a three-layer neural network with sigmoid activation functions in the hidden and output layers. This three-layer neural network can be used for classification problems, such as the classification of the Iris data set. A rigorous proof is presented for the dynamical optimal training process of this three-layer neural network, which guarantees convergence of the training in a minimum number of epochs. This dynamical optimal training does not use a fixed learning rate; instead, the learning rate is updated at each iteration to guarantee optimal convergence of the training result. Excellent results have been obtained for the XOR and Iris data sets. | en_US
dc.language.iso | en_US | en_US
dc.subject | 動態 | zh_TW
dc.subject | 倒傳遞演算法 | zh_TW
dc.subject | Dynamic | en_US
dc.subject | Optimal Training | en_US
dc.subject | back-propagation algorithm | en_US
dc.title | 三層類神經網路的動態最佳學習 | zh_TW
dc.title | Dynamic Optimal Training of A Three Layer Neural Network with Sigmoid Function | en_US
dc.type | Thesis | en_US
dc.contributor.department | 電控工程研究所 | zh_TW
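
The abstract above describes back-propagation training of a three-layer network whose hidden and output layers both use sigmoid activations, with the learning rate recomputed at every iteration instead of being held fixed. The sketch below illustrates that idea on the XOR case mentioned in the abstract; it is not the thesis's algorithm. In particular, the per-epoch line search over a small candidate set of rates, the candidate values, the network size, and all function names (`sigmoid`, `forward`, `gradients`, `train`) are illustrative assumptions standing in for the optimal learning-rate update derived in the thesis.

```python
# Minimal sketch: three-layer network (sigmoid hidden and output layers) trained by
# back-propagation, where each epoch re-selects its learning rate by a crude line
# search. This stands in for the thesis's dynamical optimal learning-rate formula,
# which is not reproduced here. All names and constants are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, W2):
    """Forward pass; a constant 1 is appended to the input and hidden vectors as a bias."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    H = sigmoid(Xb @ W1)                               # hidden activations
    Hb = np.hstack([H, np.ones((len(H), 1))])
    Y = sigmoid(Hb @ W2)                               # network outputs
    return H, Y

def mse(Y, T):
    return 0.5 * np.mean((Y - T) ** 2)

def gradients(X, T, W1, W2):
    """Back-propagation gradients of the MSE loss for both weight matrices."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    H, Y = forward(X, W1, W2)
    Hb = np.hstack([H, np.ones((len(H), 1))])
    dY = (Y - T) * Y * (1 - Y) / len(X)                # output-layer delta (MSE + sigmoid)
    dH = (dY @ W2[:-1].T) * H * (1 - H)                # hidden-layer delta (bias row dropped)
    return Xb.T @ dH, Hb.T @ dY

def train(X, T, n_hidden=4, epochs=3000, rates=(0.1, 0.5, 1.0, 2.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1] + 1, n_hidden))
    W2 = rng.normal(scale=0.5, size=(n_hidden + 1, T.shape[1]))
    for _ in range(epochs):
        g1, g2 = gradients(X, T, W1, W2)
        # "Dynamic" step: try each candidate rate and keep the one giving the lowest
        # loss for this epoch -- a crude line search, not the thesis's formula.
        best = min(rates, key=lambda r: mse(forward(X, W1 - r * g1, W2 - r * g2)[1], T))
        W1, W2 = W1 - best * g1, W2 - best * g2
    return W1, W2

if __name__ == "__main__":
    # XOR, one of the two test cases mentioned in the abstract.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    W1, W2 = train(X, T)
    print(np.round(forward(X, W1, W2)[1].ravel(), 3))  # should approach 0, 1, 1, 0
```

In this sketch the only departure from fixed-rate back-propagation is the `best = min(...)` line; a closed-form per-epoch optimal rate, such as the one the thesis derives, would replace that line search.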
Appears in Collections: Thesis


Files in This Item:

  1. 257301.pdf

If the file is a zip archive, download and extract it, then open index.html in the extracted folder with a browser to read the full text.