Full metadata record
DC Field	Value	Language
dc.contributor.author	Lin, CT	en_US
dc.contributor.author	Jou, CP	en_US
dc.date.accessioned	2014-12-08T15:45:27Z	-
dc.date.available	2014-12-08T15:45:27Z	-
dc.date.issued	2000-04-01	en_US
dc.identifier.issn	1083-4419	en_US
dc.identifier.uri	http://dx.doi.org/10.1109/3477.836376	en_US
dc.identifier.uri	http://hdl.handle.net/11536/30611	-
dc.description.abstract	This paper proposes a TD (temporal difference) and GA (genetic algorithm)-based reinforcement (TDGAR) learning method and applies it to the control of a real magnetic bearing system. The TDGAR learning scheme is a new hybrid GA, which integrates the TD prediction method and the GA to perform the reinforcement learning task. The TDGAR learning system is composed of two integrated feedforward networks. One neural network acts as a critic network to guide the learning of the other network (the action network), which determines the outputs (actions) of the TDGAR learning system. The action network can be a normal neural network or a neural fuzzy network. Using the TD prediction method, the critic network can predict the external reinforcement signal and provide a more informative internal reinforcement signal to the action network. The action network uses the GA to adapt itself according to the internal reinforcement signal. The key concept of the TDGAR learning scheme is to formulate the internal reinforcement signal as the fitness function for the GA, so that the GA can evaluate the candidate solutions (chromosomes) regularly, even during periods without external feedback from the environment. This enables the GA to proceed to new generations regularly without waiting for the arrival of the external reinforcement signal. This usually accelerates GA learning, since in reinforcement learning problems a reinforcement signal may become available only long after a sequence of actions has occurred. The proposed TDGAR learning system has been used to control an active magnetic bearing (AMB) system in practice. A systematic design procedure is developed to achieve successful integration of all the subsystems, including magnetic suspension, mechanical structure, and controller training. The results show that the TDGAR learning scheme can successfully find a neural controller or a neural fuzzy controller for a self-designed magnetic bearing system.	en_US
dc.language.iso	en_US	en_US
dc.subject	action network	en_US
dc.subject	active magnetic bearing	en_US
dc.subject	adaptive heuristic critic	en_US
dc.subject	critic network	en_US
dc.title	GA-based fuzzy reinforcement learning for control of a magnetic bearing system	en_US
dc.type	Article	en_US
dc.identifier.doi	10.1109/3477.836376	en_US
dc.identifier.journal	IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS	en_US
dc.citation.volume	30	en_US
dc.citation.issue	2	en_US
dc.citation.spage	276	en_US
dc.citation.epage	289	en_US
dc.contributor.department	電控工程研究所	zh_TW
dc.contributor.department	Institute of Electrical and Control Engineering	en_US
dc.identifier.wosnumber	WOS:000086532400003	-
dc.citation.woscount	49	-
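
Note on the abstract above: it describes the core TDGAR loop, in which a TD-trained critic network converts sparse, delayed external reinforcement into an internal reinforcement signal, and that internal signal is then used as the GA fitness for evolving the action network, so the GA can advance to new generations without waiting for external feedback. The following Python sketch is a minimal illustration of that idea only; the toy single-axis plant, the network sizes, the GA operators, and all names (toy_plant, Critic, act, fitness, evolve) are assumptions made here for illustration, not the paper's actual magnetic-bearing implementation.

# Hedged sketch of the TDGAR idea from the abstract: a TD(0) critic supplies an
# internal reinforcement signal, and that signal is accumulated as the GA fitness
# for the action network's weights. Plant dynamics and hyperparameters are toy
# assumptions, not the authors' magnetic-bearing setup.
import numpy as np

rng = np.random.default_rng(0)

GAMMA = 0.95                 # discount factor for TD prediction
POP_SIZE = 20                # GA population size (chromosomes = action-network weights)
N_STATE, N_HIDDEN = 2, 8

def toy_plant(state, action):
    """Very rough stand-in for a single-axis suspension: push position toward 0."""
    pos, vel = state
    vel = 0.9 * vel + 0.1 * (action - 2.0 * pos)
    pos = pos + 0.1 * vel
    reward = -abs(pos)       # external reinforcement (sparse/delayed in the paper's setting)
    return np.array([pos, vel]), reward

def act(chromosome, state):
    """Action network: a tiny MLP whose flattened weights form one GA chromosome."""
    w1 = chromosome[:N_STATE * N_HIDDEN].reshape(N_STATE, N_HIDDEN)
    w2 = chromosome[N_STATE * N_HIDDEN:]
    return float(np.tanh(state @ w1) @ w2)

class Critic:
    """Linear critic V(s); its TD error serves as the internal reinforcement signal."""
    def __init__(self, lr=0.05):
        self.w = np.zeros(N_STATE)
        self.lr = lr

    def internal_signal(self, s, r, s_next):
        td_error = r + GAMMA * self.w @ s_next - self.w @ s
        self.w += self.lr * td_error * s      # TD(0) update of the value prediction
        return td_error

def fitness(chromosome, critic, steps=50):
    """Accumulate the critic's internal signal so each chromosome can be scored
    every step, even when external reward would be sparse."""
    state, total = np.array([1.0, 0.0]), 0.0
    for _ in range(steps):
        a = act(chromosome, state)
        nxt, r = toy_plant(state, a)
        total += critic.internal_signal(state, r, nxt)
        state = nxt
    return total

def evolve(pop, scores):
    """Plain GA step: rank selection, one-point crossover, Gaussian mutation."""
    order = np.argsort(scores)[::-1]
    parents = pop[order[:POP_SIZE // 2]]
    children = []
    for _ in range(POP_SIZE):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, len(a))
        child = np.concatenate([a[:cut], b[cut:]])
        child += 0.05 * rng.standard_normal(len(child))
        children.append(child)
    return np.array(children)

critic = Critic()
pop = rng.standard_normal((POP_SIZE, N_STATE * N_HIDDEN + N_HIDDEN))
for gen in range(30):
    scores = np.array([fitness(c, critic) for c in pop])
    pop = evolve(pop, scores)
    print(f"gen {gen:2d}  best internal-signal fitness {scores.max():+.3f}")

In this sketch the TD error itself plays the role of the internal reinforcement; the paper's action network may instead be a neural fuzzy network, which in this toy structure would only change act().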
Appears in Collections: Journal Articles


Files in This Item:

  1. 000086532400003.pdf

If the file is a zip archive, download and extract it, then open index.html in the extracted folder with a browser to view the full text.