Full metadata record
DC Field | Value | Language
dc.contributor.author | Liu, Chien-Liang | en_US
dc.contributor.author | Chang, Chuan-Chin | en_US
dc.contributor.author | Tseng, Chun-Jan | en_US
dc.date.accessioned | 2020-07-01T05:21:15Z | -
dc.date.available | 2020-07-01T05:21:15Z | -
dc.date.issued | 2020-01-01 | en_US
dc.identifier.issn | 2169-3536 | en_US
dc.identifier.uri | http://dx.doi.org/10.1109/ACCESS.2020.2987820 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/154323 | -
dc.description.abstract | In the past decades, many optimization methods have been devised and applied to the job shop scheduling problem (JSSP) to find the optimal solution. Many of these methods assume that the schedules are executed in static environments, but real-world environments are dynamic, and unexpected events such as machine breakdowns and material problems can adversely affect the initial schedule. This work views the JSSP as a sequential decision-making problem and proposes to use deep reinforcement learning to cope with it. Combining deep learning with reinforcement learning avoids the handcrafted features used in traditional reinforcement learning, and the combination is expected to make the whole learning phase more efficient. The proposed model comprises an actor network and a critic network, both containing convolutional layers and a fully connected layer. The actor network learns how to act in different situations, while the critic network evaluates the value of a state and feeds this estimate back to the actor network. This work proposes a parallel training method, combining asynchronous updates with the deep deterministic policy gradient (DDPG), to train the model. The whole network is trained in parallel in a multi-agent environment, with simple dispatching rules serving as actions. We evaluate the proposed model on more than ten instances from a well-known benchmark problem library, the OR-Library. The evaluation results indicate that our method is competitive on static JSSP benchmark problems and achieves a good balance between makespan and execution time in dynamic environments. The scheduling score of our method is 91.12% on static JSSP benchmark problems and 80.78% in dynamic environments. | en_US
dc.language.iso | en_US | en_US
dc.subject | Job shop scheduling | en_US
dc.subject | Machine learning | en_US
dc.subject | Benchmark testing | en_US
dc.subject | Dynamic scheduling | en_US
dc.subject | Learning (artificial intelligence) | en_US
dc.subject | Training | en_US
dc.subject | Optimization | en_US
dc.subject | Job shop scheduling problem (JSSP) | en_US
dc.subject | deep reinforcement learning | en_US
dc.subject | actor-critic network | en_US
dc.subject | parallel training | en_US
dc.title | Actor-Critic Deep Reinforcement Learning for Solving Job Shop Scheduling Problems | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1109/ACCESS.2020.2987820 | en_US
dc.identifier.journal | IEEE ACCESS | en_US
dc.citation.volume | 8 | en_US
dc.citation.spage | 71752 | en_US
dc.citation.epage | 71762 | en_US
dc.contributor.department | 工業工程與管理學系 | zh_TW
dc.contributor.department | Department of Industrial Engineering and Management | en_US
dc.identifier.wosnumber | WOS:000530814400002 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Journal Articles
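The abstract above describes an actor-critic architecture in which both networks combine convolutional and fully connected layers, and the actions are simple dispatching rules. The following Python (PyTorch) snippet is a minimal sketch of such a pair, not the authors' implementation; the layer sizes, the state encoding, and the constants N_RULES and STATE_CHANNELS are illustrative assumptions.

# Minimal sketch (assumed, not the authors' code): an actor-critic pair where
# both networks contain convolutional and fully connected layers, and the actor
# outputs a probability distribution over dispatching rules (the action set
# described in the abstract). All sizes and names here are illustrative.
import torch
import torch.nn as nn

N_RULES = 6          # hypothetical number of dispatching rules (actions)
STATE_CHANNELS = 3   # hypothetical number of feature planes in the job-shop state

def conv_trunk():
    # Shared layout: two conv layers followed by global average pooling.
    return nn.Sequential(
        nn.Conv2d(STATE_CHANNELS, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_trunk(), nn.Linear(32, N_RULES))

    def forward(self, state):
        # state: (batch, STATE_CHANNELS, n_jobs, n_machines)
        return torch.softmax(self.net(state), dim=-1)  # distribution over rules

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_trunk(), nn.Linear(32, 1))

    def forward(self, state):
        return self.net(state)  # estimated value of the state

if __name__ == "__main__":
    s = torch.randn(2, STATE_CHANNELS, 10, 5)   # e.g. a 10-job x 5-machine instance
    print(Actor()(s).shape, Critic()(s).shape)  # torch.Size([2, 6]) torch.Size([2, 1])

In a DDPG-style parallel training setup as summarized in the abstract, multiple workers would collect transitions asynchronously, the critic's value estimate would drive the actor's policy update, and the chosen dispatching rule would select the next operation to schedule; those training-loop details are not given in this record and are therefore omitted here.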