Full metadata record
DC Field | Value | Language
dc.contributor.author | 洪宗貝 | en_US
dc.contributor.author | Hong, Tzung-Pei | en_US
dc.contributor.author | 曾憲雄 | en_US
dc.contributor.author | Tseng, Shian-Shyong | en_US
dc.date.accessioned | 2014-12-12T02:09:58Z | -
dc.date.available | 2014-12-12T02:09:58Z | -
dc.date.issued | 1991 | en_US
dc.identifier.uri | http://140.113.39.130/cdrfb3/record/nctu/#NT803392002 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/56392 | -
dc.description.abstract | Machine learning is important for knowledge acquisition when building expert systems, since it can induce the desired concepts directly from the examples it is given. For convenience of discussion, we divide machine learning strategies into symbolic learning strategies and neural-network learning strategies; symbolic learning strategies can further be divided into incremental learning strategies and batch learning strategies. No matter which strategy is applied, however, learning efficiency is limited by slow learning speed, and learning accuracy is constrained by noise in the given examples. Although many machine learning methods have been proposed in the past, very little of the literature discusses parallel machine learning and noise management. In this thesis, we first study the feasibility of parallel machine learning: we apply different parallel-processing techniques to incremental learning strategies, batch learning strategies, and neural-network learning strategies, respectively, to overcome the problem of slow learning speed. We then extend the original learning strategies to work in learning environments where noise is present, again proposing separate algorithms for each of the three learning strategies. Finally, we integrate the incremental and batch learning strategies to further speed up learning; the concept of two-phase learning is proposed there as well. | zh_TW
dc.description.abstract | Learning general concepts from a set of training instances has become increasingly important for artificial intelligence researchers constructing knowledge-based systems. It offers a fast way to build a prototype knowledge base, avoiding the bottleneck of knowledge acquisition. Symbolic learning strategies, according to the way they process training instances, can usually be divided into two classes: batch learning strategies and incremental learning strategies. Neural learning, like symbolic learning, is another interesting topic in A.I. No matter which strategy is adopted, however, its efficiency is limited by its learning speed and its validity is limited by noise in the training set. In the first part of this thesis, we study the feasibility of parallel machine learning. Parallel-processing techniques are applied to concept learning to overcome the problem of low learning speed. Three parallel learning models, based respectively on partitioning the learning task across multiple processors, the principle of divide-and-conquer, and saving unnecessary checking time, are proposed for batch learning, incremental learning, and neural learning. The ID3, version space, and perceptron learning methods are parallelized to show how these three parallel learning models work. Moreover, the validity and relevance of the finally learned concepts depend heavily on the accuracy of the chosen training instances, and in real applications the data provided to learning systems usually contain noise. Modifying the traditional learning methods to work well in noisy environments is therefore very important. In the second part of this thesis, the ID3, version space, and perceptron learning methods are generalized for this purpose. The generalized methods each possess some of the following additional capabilities: managing uncertain training instances, taking the differing importance of training instances into consideration, utilizing available a priori domain knowledge to guide the learning process, making a trade-off between including positive training instances and excluding negative training instances, and decreasing the time complexity of learning at the expense of only a little accuracy. The conventional version space learning algorithm is also generalized to find disjunctive concepts incrementally. Finally, two-phase learning is designed to effectively solve learning problems in which training instances arrive in two stages. Machine learning in real-world situations usually starts from an initial collection of training instances; learning then proceeds off and on as new training instances arrive intermittently. Applying only batch learning methods or only incremental learning methods cannot effectively and correctly attain the rules when training instances arrive in this two-stage way; two-phase learning methods, which integrate batch learning and incremental learning, are clearly more suitable for this kind of learning problem. In summary, we hope the ideas proposed in this thesis provide some principles for parallel machine learning, noise management, and the integration of different learning methods. More effort is, of course, still needed, since the proposed models and methods cannot yet fit all learning strategies. | en_US
dc.language.iso | zh_TW | en_US
dc.subject | Machine Learning | zh_TW
dc.subject | Noise Management | zh_TW
dc.title | A Study of Parallel Processing and Noise Management on Machine Learning | zh_TW
dc.title | A Study of Parallel Processing and Noise Management on Machine Learning | en_US
dc.type | Thesis | en_US
dc.contributor.department | Institute of Computer Science and Engineering | zh_TW
Appears in Collections: Thesis
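The abstract above names three classical methods that the thesis parallelizes and generalizes: ID3, version space, and perceptron learning. For context, here is a minimal Python sketch of the standard sequential perceptron learning rule only; it is illustrative toy code, not the parallel or noise-tolerant variants the thesis proposes, and all names in it are hypothetical.

```python
# Minimal sketch of the classical perceptron learning rule (illustrative only;
# NOT the thesis's parallel or noise-tolerant generalizations).

def perceptron_train(instances, labels, epochs=100, lr=1.0):
    """Learn weights w and bias b so that sign(w.x + b) matches labels in {-1, +1}."""
    n_features = len(instances[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in zip(instances, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            prediction = 1 if activation >= 0 else -1
            if prediction != y:          # misclassified: apply the update rule
                for i in range(n_features):
                    w[i] += lr * y * x[i]
                b += lr * y
                errors += 1
        if errors == 0:                  # converged on a separating hyperplane
            break
    return w, b

# Toy usage: learn the linearly separable concept "x1 + x2 > 1".
X = [(0.0, 0.0), (1.0, 1.0), (0.2, 0.3), (0.9, 0.8)]
y = [-1, 1, -1, 1]
w, b = perceptron_train(X, y)
print(w, b)
```

The inner check-and-update loop over training instances dominates the cost here, which is presumably the part a parallel scheme would target; this matches the abstract's emphasis on "saving unnecessary checking time" as the basis of its parallel neural learning model.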