Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lin, CT | en_US |
dc.contributor.author | Jou, CP | en_US |
dc.contributor.author | Lin, CJ | en_US |
dc.date.accessioned | 2014-12-08T15:49:17Z | - |
dc.date.available | 2014-12-08T15:49:17Z | - |
dc.date.issued | 1998-03-01 | en_US |
dc.identifier.issn | 0020-7721 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/32777 | - |
dc.description.abstract | A genetic reinforcement neural network (GRNN) is proposed to solve various reinforcement learning problems. The proposed GRNN is constructed by integrating two feedforward multilayer networks. One neural network acts as an action network for determining the outputs (actions) of the GRNN, and the other as a critic network to help the learning of the action network. Using the temporal difference prediction method, the critic network can predict the external reinforcement signal and provide a more informative internal reinforcement signal to the action network. The action network uses the genetic algorithm (GA) to adapt itself according to the internal reinforcement signal. The key concept of the proposed GRNN learning scheme is to formulate the internal reinforcement signal as the fitness function for the GA. This learning scheme forms a novel hybrid GA, which consists of the temporal difference and gradient descent methods for the critic network learning, and the GA for the action network learning. By using the internal reinforcement signal as the fitness function, the GA can evaluate the candidate solutions (chromosomes) regularly, even during the period without external reinforcement feedback from the environment. Hence, the GA can proceed to new generations regularly without waiting for the arrival of the external reinforcement signal. This can usually accelerate the GA learning because a reinforcement signal may only be available at a time long after a sequence of actions has occurred in reinforcement learning problems. Computer simulations have been conducted to illustrate the performance and applicability of the proposed learning scheme. (See the illustrative code sketch after this record.) | en_US |
dc.language.iso | en_US | en_US |
dc.title | GA-based reinforcement learning for neural networks | en_US |
dc.type | Article | en_US |
dc.identifier.journal | INTERNATIONAL JOURNAL OF SYSTEMS SCIENCE | en_US |
dc.citation.volume | 29 | en_US |
dc.citation.issue | 3 | en_US |
dc.citation.spage | 233 | en_US |
dc.citation.epage | 247 | en_US |
dc.contributor.department | 電控工程研究所 | zh_TW |
dc.contributor.department | Institute of Electrical and Control Engineering | en_US |
dc.identifier.wosnumber | WOS:000072458800002 | - |
dc.citation.woscount | 3 | - |
Appears in Collections: | Journal Articles |
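
The abstract describes how the GRNN couples a TD-trained critic network with a GA-evolved action network, using the critic's internal reinforcement signal as the GA fitness function. The Python sketch below illustrates that scheme under stated assumptions; the network sizes, toy environment dynamics, GA operators, and all function names are hypothetical choices for illustration, not the authors' implementation.

```python
# Minimal sketch of the GRNN scheme from the abstract: a critic trained by
# temporal-difference (TD) prediction supplies an internal reinforcement
# signal, which a genetic algorithm (GA) uses as the fitness function to
# evolve the action network's weights. Shapes, dynamics, and hyperparameters
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, HIDDEN, ACTION_DIM = 4, 8, 1
GENOME_LEN = STATE_DIM * HIDDEN + HIDDEN * ACTION_DIM  # flat action-net genome

def action_net(weights, state):
    """Two-layer feedforward action network decoded from a flat genome."""
    w1 = weights[:STATE_DIM * HIDDEN].reshape(STATE_DIM, HIDDEN)
    w2 = weights[STATE_DIM * HIDDEN:].reshape(HIDDEN, ACTION_DIM)
    return np.tanh(np.tanh(state @ w1) @ w2)

# Critic: a linear value predictor trained by TD(0). Its prediction error
# r + gamma * V(s') - V(s) serves as the internal reinforcement signal,
# available at every step even when the external reward is delayed.
critic_w = np.zeros(STATE_DIM)
GAMMA, ALPHA = 0.95, 0.05

def td_update(state, reward, next_state):
    v, v_next = critic_w @ state, critic_w @ next_state
    td_error = reward + GAMMA * v_next - v      # internal reinforcement
    critic_w += ALPHA * td_error * state        # gradient-descent critic update
    return td_error

def fitness(genome, episodes=5, steps=20):
    """GA fitness = accumulated internal reinforcement, so the GA can score
    chromosomes without waiting for the sparse external reward."""
    total = 0.0
    for _ in range(episodes):
        state = rng.normal(size=STATE_DIM)
        for t in range(steps):
            act = action_net(genome, state)
            next_state = np.tanh(state + 0.1 * act)    # toy dynamics (assumed)
            reward = 1.0 if t == steps - 1 else 0.0    # delayed external reward
            total += td_update(state, reward, next_state)
            state = next_state
    return total

# Plain GA over action-network genomes: truncation selection, one-point
# crossover, Gaussian mutation. The critic keeps learning across generations.
pop = rng.normal(scale=0.5, size=(20, GENOME_LEN))
for gen in range(10):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-10:]]            # keep the best half
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, GENOME_LEN)
        child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
        child += rng.normal(scale=0.02, size=GENOME_LEN)  # Gaussian mutation
        children.append(child)
    pop = np.array(children)
    print(f"gen {gen}: best internal fitness = {scores.max():.3f}")
```

Because the fitness function is built from the critic's TD error rather than the raw environment reward, each generation can be evaluated immediately, which is the acceleration effect the abstract claims for delayed-reward problems.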