Full metadata record
DC Field | Value | Language
dc.contributor.author | Hsu, Wei-Lun | en_US
dc.contributor.author | Chen, Ying-ping | en_US
dc.date.accessioned | 2018-08-21T05:56:51Z | -
dc.date.available | 2018-08-21T05:56:51Z | -
dc.date.issued | 2016-01-01 | en_US
dc.identifier.issn | 2376-6816 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/146725 | -
dc.description.abstract | Among the numerous types of games, real-time strategy (RTS) games have always been a focus of gaming competitions, and in this regard, StarCraft can arguably be considered a classic real-time strategy game. Currently, most artificial intelligence (AI) players for real-time strategy games cannot reach, or even come close to, the intelligence level of their human opponents. In order to enhance the ability of AI players and hence improve the playability of games, in this study we make an attempt to develop for StarCraft a mechanism that learns to select an appropriate action according to the circumstances. Our empirical results show that action selection can be learned by AI players with the optimization capability of genetic algorithms and that cooperation among identical and/or different types of units is observed. Potential future work and possible research directions are discussed. The developed source code and the obtained results are released as open source. | en_US
dc.language.iso | en_US | en_US
dc.subject | Real-Time Strategy Game | en_US
dc.subject | Genetic Algorithm | en_US
dc.title | Learning to Select Actions in StarCraft with Genetic Algorithms | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2016 CONFERENCE ON TECHNOLOGIES AND APPLICATIONS OF ARTIFICIAL INTELLIGENCE (TAAI) | en_US
dc.citation.spage | 270 | en_US
dc.citation.epage | 277 | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:000406594200037 | en_US
Appears in Collections: Conference Papers