Full Metadata Record
DC Field	Value	Language
dc.contributor.author	Lin, CT	en_US
dc.contributor.author	Jou, CP	en_US
dc.contributor.author	Lin, CJ	en_US
dc.date.accessioned	2014-12-08T15:49:17Z	-
dc.date.available	2014-12-08T15:49:17Z	-
dc.date.issued	1998-03-01	en_US
dc.identifier.issn	0020-7721	en_US
dc.identifier.uri	http://hdl.handle.net/11536/32777	-
dc.description.abstract	A genetic reinforcement neural network (GRNN) is proposed to solve various reinforcement learning problems. The proposed GRNN is constructed by integrating two feedforward multilayer networks. One neural network acts as an action network for determining the outputs (actions) of the GRNN, and the other as a critic network to help the learning of the action network. Using the temporal difference prediction method, the critic network can predict the external reinforcement signal and provide a more informative internal reinforcement signal to the action network. The action network uses the genetic algorithm (GA) to adapt itself according to the internal reinforcement signal. The key concept of the proposed GRNN learning scheme is to formulate the internal reinforcement signal as the fitness function for the GA. This learning scheme forms a novel hybrid GA, which consists of the temporal difference and gradient descent methods for the critic network learning, and the GA for the action network learning. By using the internal reinforcement signal as the fitness function, the GA can evaluate the candidate solutions (chromosomes) regularly, even during the period without external reinforcement feedback from the environment. Hence, the GA can proceed to new generations regularly without waiting for the arrival of the external reinforcement signal. This can usually accelerate the GA learning because a reinforcement signal may only be available at a time long after a sequence of actions has occurred in reinforcement learning problems. Computer simulations have been conducted to illustrate the performance and applicability of the proposed learning scheme.	en_US
dc.language.iso	en_US	en_US
dc.title	GA-based reinforcement learning for neural networks	en_US
dc.type	Article	en_US
dc.identifier.journal	INTERNATIONAL JOURNAL OF SYSTEMS SCIENCE	en_US
dc.citation.volume	29	en_US
dc.citation.issue	3	en_US
dc.citation.spage	233	en_US
dc.citation.epage	247	en_US
dc.contributor.department	電控工程研究所	zh_TW
dc.contributor.department	Institute of Electrical and Control Engineering	en_US
dc.identifier.wosnumber	WOS:000072458800002	-
dc.citation.woscount	3	-
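The learning scheme described in the abstract can be illustrated with a toy sketch: a TD critic turns a delayed external reward into a step-by-step internal reinforcement signal, and that signal is used directly as the GA fitness for the action network. All names here are hypothetical, and the one-weight "networks" and the tiny 1-D task are simplifications of the paper's multilayer feedforward networks, not the authors' implementation:

```python
import random

random.seed(0)

def td_internal_reinforcement(critic_w, state, next_state, reward, gamma=0.9):
    """Internal reinforcement = TD error of the critic's value prediction."""
    return reward + gamma * critic_w * next_state - critic_w * state

def run_episode(action_w, critic_w):
    """Roll out a short episode. Returns the accumulated internal
    reinforcement (used as the GA fitness) and the TD-updated critic weight.
    The external reward is delayed: it arrives only at the final step."""
    state = 1.0
    fitness = 0.0
    for t in range(5):
        action = 1.0 if action_w * state > 0 else -1.0
        next_state = state + 0.1 * action
        reward = 1.0 if t == 4 and next_state > 1.2 else 0.0  # delayed reward
        r_hat = td_internal_reinforcement(critic_w, state, next_state, reward)
        fitness += r_hat                  # fitness accrues every step,
        critic_w += 0.1 * r_hat * state   # gradient-descent TD update
        state = next_state                # not only when the reward arrives
    return fitness, critic_w

# GA over the (single) action-network weight; chromosomes are evaluated by
# internal reinforcement, so selection can proceed every generation even
# though the external reward is only seen at the end of an episode.
population = [random.uniform(-1, 1) for _ in range(20)]
critic_w = 0.0
for generation in range(30):
    scored = []
    for w in population:
        f, critic_w = run_episode(w, critic_w)
        scored.append((f, w))
    scored.sort(reverse=True)
    parents = [w for _, w in scored[:10]]
    # reproduction: keep parents, refill the population with mutated copies
    population = parents + [p + random.gauss(0, 0.1) for p in parents]

best = max(population, key=lambda w: run_episode(w, critic_w)[0])
print("best action weight:", best)
```

In this toy task only a positive action weight drives the state into the rewarded region, so the GA converges toward positive weights; the point of the sketch is that selection uses the critic's per-step TD error, not the raw delayed reward.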
Appears in Collections: Journal Articles


Files in This Item:

  1. 000072458800002.pdf

If the file is a zip archive, download and extract it, then open index.html in the extracted folder with a browser to view the full text.