Title: The Structure Design and Learning Methodology for a Fuzzy-Rule-Based Neural Network
Authors: Jean Jyh-Jiun Shann
Hsin-Chia Fu
Institute of Computer Science and Engineering
Keywords: Neural Networks; Fuzzy Systems; Learning Algorithms; Pruning Methods
Issue Date: 1993
Abstract: In this dissertation, a fuzzy neural network (FNN) for learning the knowledge of a fuzzy-logic rule-based system is presented. The FNN is a feedforward network consisting of five layers, namely the Input, Fuzzification, AND, OR, and Defuzzification layers, organized according to the inference process of fuzzy rule-based systems. The knowledge that the FNN can learn includes (1) the firing strengths of the fuzzy rules, (2) the parameters of the parametric fuzzy intersection and union operators, and (3) the bell-shaped membership functions for the linguistic values of the input and output linguistic variables. Initially, the network is constructed to contain all possible nodes and links according to the numbers of linguistic variables and linguistic values in the system.
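For readers who want to picture the inference path, the Python sketch below walks one input vector through the five layers. It is only an illustration of the structure described above: the Gaussian-style bell membership function, the Yager parametric t-norm/t-conorm standing in for the AND and OR operators, the firing-strength-weighted centroid defuzzifier, and all function and parameter names are assumptions, not the dissertation's actual formulation.

```python
import numpy as np

def bell_mf(x, center, width):
    """Bell-shaped membership value; a Gaussian form is assumed here."""
    return np.exp(-((x - center) / width) ** 2)

def yager_and(memberships, p):
    """Parametric fuzzy intersection (Yager t-norm; the operator family is an assumption)."""
    return 1.0 - min(1.0, float(np.sum((1.0 - memberships) ** p)) ** (1.0 / p))

def yager_or(values, p):
    """Parametric fuzzy union (Yager t-conorm; the operator family is an assumption)."""
    return min(1.0, float(np.sum(np.asarray(values) ** p)) ** (1.0 / p))

def fnn_forward(x, rules, input_mfs, output_centers, firing_strengths, p_and=2.0, p_or=2.0):
    """One pass through the five layers:
    Input -> Fuzzification -> AND -> OR -> Defuzzification."""
    # Fuzzification: membership degree of each input in each of its linguistic values.
    mu = {key: bell_mf(x[key[0]], c, w) for key, (c, w) in input_mfs.items()}

    # AND layer: one node per rule; combine the antecedent memberships and
    # scale the result by the rule's learnable firing strength.
    per_output = [[] for _ in output_centers]
    for (antecedent, consequent), strength in zip(rules, firing_strengths):
        memberships = np.array([mu[key] for key in antecedent])
        per_output[consequent].append(strength * yager_and(memberships, p_and))

    # OR layer: aggregate all rules that share the same output linguistic value.
    activations = np.array([yager_or(v, p_or) if v else 0.0 for v in per_output])

    # Defuzzification: weighted average of the output-value centers
    # (a simple centroid-style defuzzifier is assumed).
    total = activations.sum()
    return float(activations @ np.asarray(output_centers) / total) if total > 0 else 0.0

# Example: two inputs with two linguistic values each, four rules, one output
# variable with two linguistic values centered at 0.0 and 1.0.
input_mfs = {(0, 0): (0.0, 1.0), (0, 1): (1.0, 1.0),
             (1, 0): (0.0, 1.0), (1, 1): (1.0, 1.0)}
rules = [(((0, 0), (1, 0)), 0), (((0, 0), (1, 1)), 0),
         (((0, 1), (1, 0)), 1), (((0, 1), (1, 1)), 1)]
print(fnn_forward([0.3, 0.8], rules, input_mfs,
                  output_centers=[0.0, 1.0],
                  firing_strengths=np.ones(len(rules))))
```

In this sketch the learnable quantities listed in the abstract map onto `firing_strengths`, the operator exponents `p_and`/`p_or`, and the (center, width) pairs in `input_mfs` and `output_centers`.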
We propose a learning procedure consisting of three phases: (1) the Error Backpropagation Training (EBP-Training) Phase, which trains the learnable parameters using the gradient-descent concept of backpropagation learning algorithms; (2) the Membership Function Pruning (MF-Pruning) Phase, which prunes redundant membership functions of the input and output linguistic variables and then retrains, or continues training, the learnable parameters; and (3) the Rule Pruning (R-Pruning) Phase, which prunes redundant fuzzy rules and then retrains, or continues training, the learnable parameters. The learning algorithm in the training phase enables the network to learn the knowledge as precisely as backpropagation-type learning algorithms and yet as quickly as competitive-type learning algorithms. After training, the MF-Pruning and R-Pruning phases delete the redundant membership functions and fuzzy rules, respectively; in each pruning phase, the structure of the FNN is reduced by removing the nodes and links associated with the redundant membership functions and rules, yielding a concise and accurate fuzzy-rule base. Simulation results show that the FNN and its three-phase learning procedure achieve excellent learning ability and pruning performance.
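The dissertation defines its own redundancy measures for the two pruning phases; as a purely illustrative stand-in, the sketch below drops rules whose learned firing strength falls below a threshold and then discards membership functions that no surviving rule references. The function name, the threshold value, and the data layout (carried over from the previous sketch) are all hypothetical.

```python
import numpy as np

def prune_rules_and_mfs(rules, firing_strengths, threshold=0.05):
    """Illustrative pruning step. Assumed criteria: a rule is redundant when its
    learned firing strength is below `threshold`; a membership function is
    redundant when no surviving rule references it. `rules` pairs an antecedent
    (a tuple of (variable, linguistic-value) keys) with a consequent index,
    as in the forward-pass sketch above."""
    kept_rules, kept_strengths = [], []
    for (antecedent, consequent), strength in zip(rules, firing_strengths):
        if strength >= threshold:
            kept_rules.append((antecedent, consequent))
            kept_strengths.append(float(strength))
    # Keep only the membership functions still referenced by a surviving rule.
    kept_mfs = {key for antecedent, _ in kept_rules for key in antecedent}
    return kept_rules, np.array(kept_strengths), kept_mfs

# Example: the fourth rule has a negligible learned strength and is pruned;
# a membership function used only by that rule would be pruned along with it.
rules = [(((0, 0), (1, 0)), 0), (((0, 0), (1, 1)), 0),
         (((0, 1), (1, 0)), 1), (((0, 1), (1, 1)), 1)]
strengths = np.array([0.90, 0.70, 0.80, 0.01])
kept_rules, kept_strengths, kept_mfs = prune_rules_and_mfs(rules, strengths)
print(len(kept_rules), sorted(kept_mfs))   # 3 surviving rules
```

After a pruning pass of this kind, the surviving parameters would be retrained, or training would be continued, in the same manner as in the EBP-Training phase, which is the retraining step the abstract describes.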
URI: http://140.113.39.130/cdrfb3/record/nctu/#NT820392075
http://hdl.handle.net/11536/57883
Appears in Collections: Thesis