Full metadata record
DC Field | Value | Language
dc.contributor.author | Yeh, Kun-Hao | en_US
dc.contributor.author | Wu, I-Chen | en_US
dc.contributor.author | Hsueh, Chu-Hsuan | en_US
dc.contributor.author | Chang, Chia-Chuan | en_US
dc.contributor.author | Liang, Chao-Chin | en_US
dc.contributor.author | Chiang, Han | en_US
dc.date.accessioned | 2018-08-21T05:53:06Z | -
dc.date.available | 2018-08-21T05:53:06Z | -
dc.date.issued | 2017-12-01 | en_US
dc.identifier.issn | 1943-068X | en_US
dc.identifier.uri | http://dx.doi.org/10.1109/TCIAIG.2016.2593710 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/144266 | -
dc.description.abstract | Szubert and Jaskowski successfully used temporal difference (TD) learning together with n-tuple networks for playing the game 2048. However, we observed that programs based on TD learning still rarely reach large tiles. In this paper, we propose multistage TD (MS-TD) learning, a kind of hierarchical reinforcement learning method, to effectively improve the rates of reaching large tiles, which are good metrics for analyzing the strength of 2048 programs. Our experiments showed significant improvements over the program without MS-TD learning. Namely, using 3-ply expectimax search, the program with MS-TD learning reached 32768-tiles at a rate of 18.31%, while the one with TD learning reached none. After further tuning, our 2048 program reached 32768-tiles at a rate of 31.75% over 10,000 games, and one of these games even reached a 65536-tile, which is, to our knowledge, the first time a 65536-tile has ever been reached. In addition, the MS-TD learning method can easily be applied to other 2048-like games, such as Threes. Our experiments for Threes demonstrated a similar improvement: the program with MS-TD learning reached 6144-tiles at a rate of 7.83%, while the one with TD learning did so at a rate of only 0.45%. | en_US
dc.language.iso | en_US | en_US
dc.subject | 2048 | en_US
dc.subject | expectimax | en_US
dc.subject | stochastic puzzle game | en_US
dc.subject | temporal difference (TD) learning | en_US
dc.subject | threes | en_US
dc.title | Multistage Temporal Difference Learning for 2048-Like Games | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1109/TCIAIG.2016.2593710 | en_US
dc.identifier.journal | IEEE TRANSACTIONS ON COMPUTATIONAL INTELLIGENCE AND AI IN GAMES | en_US
dc.citation.volume | 9 | en_US
dc.citation.spage | 369 | en_US
dc.citation.epage | 380 | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:000418422900005 | en_US
Appears in Collections: Articles
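The abstract describes building on TD learning with n-tuple networks, the approach Szubert and Jaskowski applied to 2048. Below is a minimal, illustrative sketch of plain single-stage TD(0) afterstate learning with n-tuple networks for 2048. It is not the paper's MS-TD method; the board encoding (cells hold tile exponents), the choice of rows and columns as tuples, and the learning rate are assumptions made for brevity.

```python
import random

SIZE = 4
# Each n-tuple is a list of board indices (0..15); here: the four rows and four columns.
TUPLES = [[r * SIZE + c for c in range(SIZE)] for r in range(SIZE)] + \
         [[r * SIZE + c for r in range(SIZE)] for c in range(SIZE)]
# One lookup table of weights per tuple, keyed by the tuple's cell exponents.
weights = [dict() for _ in TUPLES]

def value(board):
    """Sum of lookup-table weights over all n-tuples (0 for unseen patterns)."""
    return sum(tbl.get(tuple(board[i] for i in idx), 0.0)
               for tbl, idx in zip(weights, TUPLES))

def update(board, delta, alpha=0.01):
    """TD update: spread the error evenly across the tuple tables."""
    share = alpha * delta / len(TUPLES)
    for tbl, idx in zip(weights, TUPLES):
        key = tuple(board[i] for i in idx)
        tbl[key] = tbl.get(key, 0.0) + share

def slide_row(row):
    """Slide/merge one row to the left; cells hold exponents (0 = empty)."""
    tiles = [x for x in row if x]
    out, score, i = [], 0, 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            out.append(tiles[i] + 1)       # merge two equal tiles
            score += 2 ** (tiles[i] + 1)   # reward = value of the merged tile
            i += 2
        else:
            out.append(tiles[i])
            i += 1
    return out + [0] * (SIZE - len(out)), score

def move(board, direction):
    """(afterstate, reward) for direction 0..3 = left/right/up/down; None if illegal."""
    new, reward = list(board), 0
    for k in range(SIZE):
        if direction < 2:  # horizontal: one row of indices
            idx = list(range(k * SIZE, (k + 1) * SIZE))
        else:              # vertical: one column of indices
            idx = list(range(k, SIZE * SIZE, SIZE))
        if direction in (1, 3):
            idx.reverse()  # slide toward the far end
        row, s = slide_row([board[i] for i in idx])
        reward += s
        for i, v in zip(idx, row):
            new[i] = v
    return (new, reward) if new != board else None

def add_tile(board):
    """Spawn a 2-tile (p=0.9) or 4-tile (p=0.1) on a random empty cell."""
    i = random.choice([i for i, v in enumerate(board) if v == 0])
    board[i] = 1 if random.random() < 0.9 else 2

def play_episode():
    """One self-play game: greedy action choice plus TD(0) afterstate updates."""
    board = [0] * (SIZE * SIZE)
    add_tile(board); add_tile(board)
    prev_after, total = None, 0
    while True:
        options = [(d, m) for d in range(4) if (m := move(board, d))]
        if not options:
            if prev_after:
                update(prev_after, 0 - value(prev_after))  # terminal target is 0
            return total
        # Greedy: maximize immediate reward + value of the resulting afterstate.
        d, (after, r) = max(options, key=lambda o: o[1][1] + value(o[1][0]))
        if prev_after:
            update(prev_after, r + value(after) - value(prev_after))
        total += r
        board = list(after)
        add_tile(board)
        prev_after = after
```

In MS-TD learning, roughly, the game is divided into stages and a separate value function is trained for each stage; the sketch above trains a single set of tuple tables for the whole game.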