Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 黃文鐸 | en_US |
dc.contributor.author | Wen-Dwo Hwang | en_US |
dc.contributor.author | 曾建超;周文光 | en_US |
dc.contributor.author | C. C. Tseng;W. K. Chou | en_US |
dc.date.accessioned | 2014-12-12T02:11:51Z | - |
dc.date.available | 2014-12-12T02:11:51Z | - |
dc.date.issued | 1993 | en_US |
dc.identifier.uri | http://140.113.39.130/cdrfb3/record/nctu/#NT820392005 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/57807 | - |
dc.description.abstract | Neural networks are commonly used in pattern recognition and image processing. Their popularity stems not only from their wide range of applications but also from their capacity for inference and memory. Their long training time, however, remains a drawback. Intuitively, there are two ways to speed up learning: use a more efficient learning algorithm and a better network model, or reduce the amount of training data. This thesis concentrates on the latter. A selection algorithm filters the training data; the selected samples, a subset of the original training set, are called typical points. Feeding these typical points to the network during the learning phase, instead of the whole original training set, should improve the learning speed, since the number of training samples is reduced. Two selection algorithms are presented: the first uses the adjacent-point list derived from the Delaunay triangulation; the second, also based on the Delaunay triangulation, applies the alpha-shape method. Both are implemented in C on a PC. The selected training data were tested on back-propagation, currently the most widely used network model, simulated with the NWORKS package; the proposed methods nevertheless apply to other supervised learning algorithms and neural network models. Experimental results show that the selection algorithms indeed improve the learning speed without sacrificing correctness: a network trained on the selected data still classifies the original training data correctly in the testing phase, and training on the selected data takes far less time than training on the original data. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | selection algorithms;Delaunay triangulation | zh_TW |
dc.subject | selection algorithms;Delaunay triangulation;alpha shape | en_US |
dc.title | Selection of Training Data in Neural Networks | zh_TW |
dc.title | Selection Algorithm for Neural Network Learning | en_US |
dc.type | Thesis | en_US |
dc.contributor.department | Institute of Computer Science and Engineering | zh_TW |
Appears in Collections: | Thesis |
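The data-selection idea described in the abstract, keeping only "typical points" near the class boundary and discarding interior points, can be sketched as follows. This is a minimal illustrative sketch, not the thesis implementation: it substitutes a k-nearest-neighbour adjacency for the Delaunay adjacent-point list, and the function names and toy data set are hypothetical.

```python
import math

def knn_indices(points, i, k):
    """Indices of the k nearest neighbours of points[i] -- a stand-in
    for the Delaunay adjacent-point list used in the thesis."""
    dists = sorted(
        (math.dist(points[i], points[j]), j)
        for j in range(len(points)) if j != i
    )
    return [j for _, j in dists[:k]]

def select_typical_points(points, labels, k=3):
    """Keep only points with at least one differently-labelled
    neighbour, i.e. points near the decision boundary; interior
    points of a class region are discarded."""
    keep = []
    for i in range(len(points)):
        if any(labels[j] != labels[i] for j in knn_indices(points, i, k)):
            keep.append(i)
    return keep

# Six points on a line, class 0 on the left and class 1 on the right:
pts = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (5, 0)]
lbl = [0, 0, 0, 1, 1, 1]
typical = select_typical_points(pts, lbl, k=2)
# typical == [2, 3]: only the two points flanking the class boundary survive.
```

Training a back-propagation network on `typical` instead of the full set is the speed-up the abstract describes: fewer samples per epoch, while the retained boundary points still determine the learned decision surface.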