Full metadata record
DC Field | Value | Language
---|---|---
dc.contributor.author | Hsu, Wei-Lun | en_US |
dc.contributor.author | Chen, Ying-ping | en_US |
dc.date.accessioned | 2018-08-21T05:56:51Z | - |
dc.date.available | 2018-08-21T05:56:51Z | - |
dc.date.issued | 2016-01-01 | en_US |
dc.identifier.issn | 2376-6816 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/146725 | - |
dc.description.abstract | Among the numerous different types of games, real-time strategy (RTS) games have always been a focus of gaming competitions, and in this regard, StarCraft can arguably be considered a classic real-time strategy game. Currently, most artificial intelligence (AI) players for real-time strategy games cannot reach, or even get close to, the intelligence level of their human opponents. In order to enhance the ability of AI players and hence improve the playability of games, in this study we attempt to develop for StarCraft a mechanism that learns to select an appropriate action to take according to the circumstances. Our empirical results show that action selection can be learned by AI players with the optimization capability of genetic algorithms, and that cooperation among identical and/or different types of units is observed. Potential future work and possible research directions are discussed. The developed source code and the obtained results are released as open source. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | Real-Time Strategy Game | en_US |
dc.subject | Genetic Algorithm | en_US |
dc.title | Learning to Select Actions in StarCraft with Genetic Algorithms | en_US |
dc.type | Proceedings Paper | en_US |
dc.identifier.journal | 2016 CONFERENCE ON TECHNOLOGIES AND APPLICATIONS OF ARTIFICIAL INTELLIGENCE (TAAI) | en_US |
dc.citation.spage | 270 | en_US |
dc.citation.epage | 277 | en_US |
dc.contributor.department | 資訊工程學系 | zh_TW |
dc.contributor.department | Department of Computer Science | en_US |
dc.identifier.wosnumber | WOS:000406594200037 | en_US |
Appears in Collections: | Conference Papers |
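The abstract describes evolving an action-selection policy with a genetic algorithm. A minimal sketch of that idea follows, under loud assumptions: the action set, the state count, and the reward table are all hypothetical stand-ins (the paper evaluates fitness via actual StarCraft game outcomes, and its real action definitions are in the released source code). The chromosome here simply maps each combat situation to one action index, evolved by tournament selection, one-point crossover, and mutation with elitism.

```python
import random

random.seed(0)

# Toy stand-in for StarCraft: 6 combat situations, 4 candidate actions.
# This reward table is hypothetical; in the paper, fitness comes from
# playing out actual game episodes.
N_STATES, N_ACTIONS = 6, 4
REWARD = [[random.uniform(0, 1) for _ in range(N_ACTIONS)]
          for _ in range(N_STATES)]

def fitness(chrom):
    # A chromosome assigns one action index to each situation;
    # its fitness is the total reward of those choices.
    return sum(REWARD[s][a] for s, a in enumerate(chrom))

def tournament(pop, k=3):
    # Pick the fittest of k randomly sampled individuals.
    return max(random.sample(pop, k), key=fitness)

def crossover(p1, p2):
    # One-point crossover between two parent chromosomes.
    cut = random.randrange(1, N_STATES)
    return p1[:cut] + p2[cut:]

def mutate(chrom, rate=0.1):
    # Each gene is re-randomized with probability `rate`.
    return [random.randrange(N_ACTIONS) if random.random() < rate else g
            for g in chrom]

def evolve(pop_size=30, generations=40):
    pop = [[random.randrange(N_ACTIONS) for _ in range(N_STATES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = max(pop, key=fitness)  # elitism: keep the current best
        pop = [elite] + [mutate(crossover(tournament(pop), tournament(pop)))
                         for _ in range(pop_size - 1)]
    return max(pop, key=fitness)

best = evolve()
```

In the paper's setting, the expensive part is the fitness evaluation (a full game simulation), which is why a population-based method that needs no gradients is a natural fit.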