Full metadata record
DC Field | Value | Language
dc.contributor.author | 江政欽 | en_US
dc.contributor.author | Cheng-Chin Chiang | en_US
dc.contributor.author | 傅心家 | en_US
dc.contributor.author | Hsin-Chia Fu | en_US
dc.date.accessioned | 2014-12-12T02:10:25Z | -
dc.date.available | 2014-12-12T02:10:25Z | -
dc.date.issued | 1992 | en_US
dc.identifier.uri | http://140.113.39.130/cdrfb3/record/nctu/#NT810392006 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/56732 | -
dc.description.abstract | The objective of this research is to propose methods for solving the two major problems of supervised-learning neural networks: (1) slow learning speed and (2) learning failure. The proposed methods build on four kernel concepts for developing new supervised-learning neural models: (1) Dynamic Threshold, (2) Semi-Locally-Tuned Property, (3) Maximum Likelihood Self-Growing Learning, and (4) Divide-and-Conquer Learning. The dynamic threshold enhances the learning capability of a neural network; the semi-locally-tuned property improves its learning speed; and the maximum likelihood self-growing learning and divide-and-conquer learning guarantee successful learning. Based on these four concepts, we propose five neural models: (1) the Bistate Floating-Threshold Neural Network, (2) the Static Threshold Quadratic Sigmoidal Neural Network, (3) the Dynamic Threshold Quadratic Sigmoidal Neural Network, (4) the Maximum Likelihood Learning Self-Growing Neural Network, and (5) the Divide-and-Conquer Learning Self-Growing Neural Network. In the simulation experiments, all five models outperform conventional supervised-learning multilayer perceptrons in learning speed, generalization capability, and rate of successful learning. We also analyze the theoretical learning capability of the first three models, and we design a ring systolic array processor for their parallel implementation. As a practical application, we evaluate the last two models on handwritten digit recognition, where both achieve very good learning and recognition results compared with conventional supervised-learning multilayer perceptrons. (An illustrative sketch of the threshold concepts follows this record.) | en_US
dc.language.iso | en_US | en_US
dc.subject | neural network; supervised learning algorithm; array processor; pattern recognition; function approximation | en_US
dc.title | The Study of Supervised-Learning Neural Models | en_US
dc.type | Thesis | en_US
dc.contributor.department | Institute of Computer Science and Engineering (資訊科學與工程研究所) | zh_TW
Appears in Collections: Thesis
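The abstract names a dynamic threshold and a semi-locally-tuned property but, being an abstract, gives no formulas. The Python sketch below is a minimal illustration under stated assumptions, not the thesis's actual definitions: it assumes a quadratic sigmoidal unit that passes theta^2 - net^2 through a logistic function, so the unit responds strongly only inside the slab |net| < |theta| (a semi-local response along one direction), and it assumes the dynamic threshold is computed from the input by a second, hypothetical weight vector v. All function names and exact forms are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    # Standard logistic squashing function.
    return 1.0 / (1.0 + np.exp(-z))

def conventional_unit(x, w, b):
    # Conventional sigmoidal neuron: monotone response to the weighted
    # sum, so its "on" region is an open half-space.
    return sigmoid(np.dot(w, x) + b)

def static_threshold_quadratic_unit(x, w, theta, b):
    # Hypothetical quadratic sigmoidal unit with a STATIC threshold:
    # the squared weighted sum is compared against a fixed parameter
    # theta, so the unit fires mainly where |net| < |theta| -- a slab
    # between two parallel hyperplanes rather than a half-space.
    net = np.dot(w, x) + b
    return sigmoid(theta**2 - net**2)

def dynamic_threshold_quadratic_unit(x, w, v, b, c):
    # Hypothetical DYNAMIC-threshold variant: the threshold is not a
    # fixed parameter but is itself computed from the input by a second
    # weight vector v, so the response region can vary per pattern.
    net = np.dot(w, x) + b
    theta = np.dot(v, x) + c
    return sigmoid(theta**2 - net**2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=4)
    w = rng.normal(size=4)
    v = rng.normal(size=4)
    print(conventional_unit(x, w, 0.0))
    print(static_threshold_quadratic_unit(x, w, theta=1.5, b=0.0))
    print(dynamic_threshold_quadratic_unit(x, w, v, b=0.0, c=0.5))
```

Under these assumptions, the static unit's response region is closed along the w direction and unbounded along the others, which is one way a quadratic form with a threshold can represent classes a single hyperplane cannot; making theta input-dependent lets that region adapt per pattern, in the spirit of the added learning capability the abstract attributes to dynamic thresholds.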