Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Fu, HC | en_US |
dc.contributor.author | Lee, YP | en_US |
dc.contributor.author | Chiang, CC | en_US |
dc.contributor.author | Pao, HT | en_US |
dc.date.accessioned | 2014-12-08T15:44:09Z | - |
dc.date.available | 2014-12-08T15:44:09Z | - |
dc.date.issued | 2001-03-01 | en_US |
dc.identifier.issn | 1045-9227 | en_US |
dc.identifier.uri | http://dx.doi.org/10.1109/72.914522 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/29812 | - |
dc.description.abstract | A novel modular perceptron network (MPN) and divide-and-conquer learning (DCL) schemes for the design of modular neural networks are proposed. When a training process in a multilayer perceptron falls into a local minimum or stalls in a flat region, the proposed DCL scheme is applied to divide the current training data region (e.g., a hard-to-learn training set) into two (hopefully) easier-to-learn regions. The learning process continues when a self-growing perceptron network and its initial weight estimation are constructed for one of the newly partitioned regions. The other partitioned region resumes the training process on the original perceptron network. Data region partitioning, weight estimating, and learning are iteratively repeated until all the training data are completely learned by the MPN. We have evaluated and compared the proposed MPN with several representative neural networks on the two-spirals problem and a real-world dataset. The MPN achieves better weight learning performance by requiring far fewer data presentations (99.01% to 87.86% fewer) during the network training phases, better generalization performance (4.0% better), and less processing time (2.0% to 81.3% less) during the retrieving phase. On learning the real-world data, the MPNs show less overfitting compared to a single MLP. In addition, due to its self-growing and fast local learning characteristics, the modular network (MPN) can easily adapt to on-line and/or incremental learning requirements in a rapidly changing environment. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | divide-and-conquer learning | en_US |
dc.subject | modular perceptron network | en_US |
dc.subject | multilayer perceptron | en_US |
dc.subject | weight estimation | en_US |
dc.title | Divide-and-conquer learning and modular perceptron networks | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1109/72.914522 | en_US |
dc.identifier.journal | IEEE TRANSACTIONS ON NEURAL NETWORKS | en_US |
dc.citation.volume | 12 | en_US |
dc.citation.issue | 2 | en_US |
dc.citation.spage | 250 | en_US |
dc.citation.epage | 263 | en_US |
dc.contributor.department | 資訊工程學系 | zh_TW |
dc.contributor.department | 管理科學系 | zh_TW |
dc.contributor.department | Department of Computer Science | en_US |
dc.contributor.department | Department of Management Science | en_US |
dc.identifier.wosnumber | WOS:000167886700006 | - |
dc.citation.woscount | 27 | - |
Appears in Collections: | Journal Articles
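
The abstract above describes the divide-and-conquer learning (DCL) loop: when training stalls in a local minimum or flat region, the hard region is split in two, a newly grown module takes one half, and the original network resumes on the other, until every region is learned. The sketch below is a minimal, purely illustrative Python rendering of that loop under stated assumptions; the random-hyperplane split, nearest-centroid routing, and all names (`train_modular`, `predict_modular`, `ConstantModule`, `acc_threshold`, `max_modules`) are hypothetical stand-ins, not the paper's partitioning rule, weight-estimation method, or self-growing architecture.

```python
# Illustrative sketch of divide-and-conquer learning for a modular perceptron
# network.  NOT the authors' implementation; the split and routing rules here
# are simplified placeholders for the scheme described in the abstract.
import numpy as np
from sklearn.neural_network import MLPClassifier


class ConstantModule:
    """Trivial module for a region that contains a single class (hypothetical helper)."""
    def __init__(self, label):
        self.label = label

    def predict(self, X):
        return np.full(len(X), self.label)


def train_modular(X, y, acc_threshold=0.95, max_modules=8, seed=0):
    """Grow a list of small MLP modules by splitting hard regions in two."""
    rng = np.random.default_rng(seed)
    pending = [(X, y)]          # data regions still waiting to be learned
    modules = []                # (module, region centroid) pairs
    while pending and len(modules) < max_modules:
        Xr, yr = pending.pop()
        centroid = Xr.mean(axis=0)
        if len(np.unique(yr)) < 2:              # single-class region: trivially learned
            modules.append((ConstantModule(yr[0]), centroid))
            continue
        net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=300, random_state=seed)
        net.fit(Xr, yr)                         # may warn if it stops before converging
        if net.score(Xr, yr) >= acc_threshold:  # region learned well enough: keep module
            modules.append((net, centroid))
            continue
        # Training stalled: split the hard region along a random hyperplane
        # through its centroid (a stand-in for the paper's partitioning rule).
        w = rng.normal(size=Xr.shape[1])
        side = (Xr - centroid) @ w > 0
        if side.all() or not side.any():        # degenerate split: keep the module as is
            modules.append((net, centroid))
            continue
        pending.append((Xr[side], yr[side]))    # one half gets a newly grown module
        pending.append((Xr[~side], yr[~side]))  # the other half resumes training
    return modules


def predict_modular(modules, X):
    """Route each sample to the module whose region centroid is nearest."""
    centroids = np.stack([c for _, c in modules])
    nearest = ((X[:, None, :] - centroids[None]) ** 2).sum(-1).argmin(axis=1)
    return np.array([modules[m][0].predict(x[None, :])[0]
                     for m, x in zip(nearest, X)])
```

As a usage sketch, on a two-class dataset such as a two-spirals sample one would call `modules = train_modular(X_train, y_train)` and then `predict_modular(modules, X_test)`; the per-region modules stay small, which reflects the local, incremental learning behavior the abstract attributes to the MPN.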