Title: The Structure Design and Learning Methodology for a Fuzzy-Rule-Based Neural Network
Authors: Jean Jyh-Jiun Shann (單智君); Hsin-Chia Fu (傅心家)
Institute of Computer Science and Engineering
Keywords: Neural Networks; Fuzzy Systems; Learning Algorithms; Pruning Methods
Issue Date: 1993
Abstract: In this dissertation, a fuzzy neural network (FNN) for learning the knowledge of a fuzzy-logic rule-based system is presented. The FNN is a feedforward network consisting of five layers, the Input, Fuzzification, AND, OR, and Defuzzification layers, organized according to the inference process of fuzzy rule-based systems. The knowledge that can be learned by the FNN includes the firing strengths of the fuzzy rules, the parameters of the parametric fuzzy intersection and union operators, and the bell-shaped membership functions for the linguistic values of the input and output linguistic variables. Initially, the network is constructed to contain all the possible nodes and links according to the numbers of linguistic variables and values of the system.

We propose a learning procedure that consists of three phases:
(1) the Error Backpropagation Training (EBP-Training) Phase: train the learnable parameters based on the gradient-descent concept of backpropagation learning algorithms;
(2) the Membership Function Pruning (MF-Pruning) Phase: prune the redundant membership functions of the input and output linguistic variables, then retrain or continue training the learnable parameters;
(3) the Rule Pruning (R-Pruning) Phase: prune the redundant fuzzy rules, then retrain or continue training the learnable parameters.

The learning algorithm in the training phase enables the network to learn the knowledge as precisely as backpropagation-type learning algorithms and yet as quickly as competitive-type learning algorithms. After training, the MF-Pruning and R-Pruning phases delete redundant membership functions and fuzzy rules, respectively. In each pruning phase, the structure of the FNN is reduced by deleting the nodes and links associated with the redundant membership functions and fuzzy rules. In simulation experiments, the FNN and its three-phase learning procedure demonstrated strong learning ability and effective pruning, yielding a compact and accurate fuzzy rule base.
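To make the five-layer architecture concrete, the following is a minimal sketch of a single forward pass through such a network, written in Python. The Gaussian form of the bell-shaped membership function, the Yager parametric t-norm/t-conorm used for the AND and OR layers, and the width-weighted defuzzification are illustrative assumptions, not the dissertation's exact parameterizations.

```python
import numpy as np

def bell_mf(x, center, width):
    """Bell-shaped membership degree of a crisp input (Gaussian form assumed)."""
    return np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def yager_and(memberships, p):
    """Parametric fuzzy intersection (Yager t-norm) over a rule's antecedent degrees."""
    m = np.asarray(memberships, dtype=float)
    return 1.0 - min(1.0, float(np.sum((1.0 - m) ** p)) ** (1.0 / p))

def yager_or(values, p):
    """Parametric fuzzy union (Yager t-conorm) over the rules feeding one output value."""
    v = np.asarray(values, dtype=float)
    return min(1.0, float(np.sum(v ** p)) ** (1.0 / p))

def fnn_forward(x, mf_in, rules, firing_strengths, mf_out, p_and=2.0, p_or=2.0):
    """One forward pass: Input -> Fuzzification -> AND -> OR -> Defuzzification.

    x                : crisp input vector, one entry per input linguistic variable
    mf_in            : mf_in[i] = list of (center, width) pairs for variable i's linguistic values
    rules            : rules[r] = (antecedents, out_value), antecedents = [(var_index, value_index), ...]
    firing_strengths : learnable rule weights in [0, 1]
    mf_out           : list of (center, width) pairs for the output linguistic values
    """
    # Layer 2: fuzzify each input against each of its membership functions.
    mu = [[bell_mf(x[i], c, w) for (c, w) in mfs] for i, mfs in enumerate(mf_in)]

    # Layer 3: one AND node per rule, scaled by the rule's learnable firing strength.
    rule_out = np.zeros(len(rules))
    consequent = np.zeros(len(rules), dtype=int)
    for r, (antecedents, out_value) in enumerate(rules):
        degrees = [mu[var][val] for (var, val) in antecedents]
        rule_out[r] = firing_strengths[r] * yager_and(degrees, p_and)
        consequent[r] = out_value

    # Layer 4: one OR node per output linguistic value, aggregating its rules.
    agg = np.array([
        yager_or(rule_out[consequent == j], p_or) if np.any(consequent == j) else 0.0
        for j in range(len(mf_out))
    ])

    # Layer 5: defuzzify with a width-weighted centre-of-gravity approximation.
    centers = np.array([c for (c, _) in mf_out])
    widths = np.array([w for (_, w) in mf_out])
    return float(np.sum(agg * widths * centers) / (np.sum(agg * widths) + 1e-12))

if __name__ == "__main__":
    # Toy example: two inputs with two linguistic values each, two rules, two output values.
    mf_in = [[(0.0, 0.3), (1.0, 0.3)], [(0.0, 0.3), (1.0, 0.3)]]
    mf_out = [(0.0, 0.3), (1.0, 0.3)]
    rules = [([(0, 0), (1, 0)], 0), ([(0, 1), (1, 1)], 1)]
    print(fnn_forward([0.2, 0.1], mf_in, rules, firing_strengths=[1.0, 1.0], mf_out=mf_out))
```

In the EBP-Training Phase, the firing strengths, the operator parameters (here p_and and p_or), and the centers and widths of the input and output membership functions would all be adjusted by gradient descent on the output error.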
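The two pruning phases can be pictured as follows. The redundancy criteria used here, high overlap between two membership functions of the same linguistic variable and a near-zero learned firing strength for a rule, are assumptions made for illustration; the dissertation defines its own pruning conditions, and in both phases the surviving parameters are retrained or further trained after pruning.

```python
import numpy as np

def mf_overlap(mf_a, mf_b):
    """Rough similarity of two bell-shaped MFs from their centers and widths (assumed measure)."""
    (ca, wa), (cb, wb) = mf_a, mf_b
    return np.exp(-abs(ca - cb) / (wa + wb))

def prune_membership_functions(mf_in, overlap_threshold=0.9):
    """MF-Pruning Phase: drop an MF that nearly duplicates another MF of the same variable."""
    pruned = []
    for mfs in mf_in:
        kept = []
        for mf in mfs:
            if all(mf_overlap(mf, other) < overlap_threshold for other in kept):
                kept.append(mf)
        pruned.append(kept)
    return pruned

def prune_rules(rules, firing_strengths, strength_threshold=0.05):
    """R-Pruning Phase: drop rules whose learned firing strength is negligible."""
    keep = [r for r, s in enumerate(firing_strengths) if s >= strength_threshold]
    return [rules[r] for r in keep], [firing_strengths[r] for r in keep]
```

After each pruning step, the nodes and links tied to the removed membership functions or rules are deleted from the network, which is how the FNN structure shrinks toward a compact rule base.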
URI: http://140.113.39.130/cdrfb3/record/nctu/#NT820392075
http://hdl.handle.net/11536/57883
Appears in Collections: Thesis