Full metadata record
DC Field: Value (Language)
dc.contributor.author: 涂宏成 (en_US)
dc.contributor.author: Hung-Cheng Tu (en_US)
dc.contributor.author: 林育平 (en_US)
dc.contributor.author: Prof. Yu-Ping Lin (en_US)
dc.date.accessioned: 2014-12-12T02:11:49Z
dc.date.available: 2014-12-12T02:11:49Z
dc.date.issued: 1993 (en_US)
dc.identifier.uri: http://140.113.39.130/cdrfb3/record/nctu/#NT820327059 (en_US)
dc.identifier.uri: http://hdl.handle.net/11536/57777
dc.description.abstract (zh_TW): 近幾年來,類神經網路已經變成研究的主要領域。其之所以如此,主要是因為類神經網路具有學習的能力,而其學習的能力是藉由學習法則來表現。大部份的學習法則都是以梯度下降法為基礎,而梯度下降法的觀念是對誤差函數取微分,所以導致其掉入區域最小值,無法到達全域最小值。相對地,隨機尋優技巧不需要對誤差函數取微分,所以其不會有區域最小值的問題。因此我們可利用隨機尋優技巧來尋找具有多形能量函數之類神經網路的全域最小值。本論文的主旨是將隨機尋優技巧應用於具有多形能量函數之類神經網路上及利用隨機尋優技巧來改善類神經網路的性能。而且我們會將隨機尋優技巧與誤差回授(back-propagation)的方法做一個比較。最後我們說明隨機尋優技巧可用來解決多形能量最佳化的問題。
dc.description.abstract (en_US): Neural networks have become a very active area of research, and much of that research concerns their learning ability. Learning in a neural network is specified by a learning algorithm, and many learning algorithms have been developed. Most are based on the gradient descent method, which exploits the derivatives of the error function; consequently, they cannot always find the global optimum when the error function is multi-modal, and they sometimes fall into a local minimum. The random optimization method, by contrast, does not use the derivatives of the error function, so it can find the global optimum. The main objective of this thesis is to apply random search techniques to actual neural networks whose error functions are multi-modal, and to use random search techniques to improve the performance of neural networks trained with common learning algorithms. Finally, we compare random search techniques with a conventional technique (back-propagation) for global optimization. In this thesis we investigate the optimization ability of various methods, including back-propagation and random search techniques. We first briefly review several random search techniques. Simulation results indicate that random search techniques can be used to solve multi-modal optimization problems (e.g. function approximation and pattern classification).
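The abstract contrasts derivative-based learning (back-propagation/gradient descent), which can be trapped by a multi-modal error surface, with random optimization, which needs no derivatives at all. A minimal sketch of that contrast is below, using the Rastrigin function as a stand-in multi-modal "energy surface"; the test function, step sizes, and iteration counts are illustrative assumptions, not the thesis's actual experiments:

```python
import math
import random

def error(w):
    """Toy multi-modal 'error surface' (Rastrigin function):
    many local minima; the global minimum is 0 at w = (0, 0)."""
    return sum(wi * wi - 10 * math.cos(2 * math.pi * wi) + 10 for wi in w)

def error_grad(w):
    # Analytic gradient, as a derivative-based method requires.
    return [2 * wi + 20 * math.pi * math.sin(2 * math.pi * wi) for wi in w]

def gradient_descent(w0, lr=0.001, steps=5000):
    # Plain gradient descent: follows the slope, so it settles into
    # whichever local minimum the starting point belongs to.
    w = list(w0)
    for _ in range(steps):
        g = error_grad(w)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w, error(w)

def random_optimize(w0, sigma=0.5, steps=20000, seed=1):
    """Matyas-style random optimization: perturb the weights with
    Gaussian noise and keep the candidate only if the error drops.
    No derivative of the error function is ever evaluated, so a
    lucky jump can cross the barrier between two basins."""
    rng = random.Random(seed)
    w, best = list(w0), error(w0)
    for _ in range(steps):
        cand = [wi + rng.gauss(0.0, sigma) for wi in w]
        e = error(cand)
        if e < best:  # accept only improvements
            w, best = cand, e
    return w, best

start = [3.0, -4.0]
_, e_gd = gradient_descent(start)
_, e_rs = random_optimize(start)
print(f"gradient descent: {e_gd:.2f}, random search: {e_rs:.2f}")
```

On this surface, gradient descent from (3, -4) converges to the local minimum near the starting point (error close to 25), while the random search typically works its way down to the neighborhood of the global minimum at the origin.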
dc.language.iso: en_US (en_US)
dc.subject: 隨機尋優技巧; 全域最小值; 誤差表面; 誤差回饋演算法 (zh_TW)
dc.subject: random search techniques; global minimum; error surface; back-propagation algorithm (en_US)
dc.title: 隨機尋優技巧應用於類神經網路學習之研究 (zh_TW)
dc.title: Random Search Techniques for Complex Neural Network Learning (en_US)
dc.type: Thesis (en_US)
dc.contributor.department: 電控工程研究所 [Institute of Electrical and Control Engineering] (zh_TW)
Appears in Collections: Thesis