Full metadata record
DC Field / Value / Language
dc.contributor.author: Tseng, Huan-Hsin (en_US)
dc.contributor.author: Luo, Yi (en_US)
dc.contributor.author: Cui, Sunan (en_US)
dc.contributor.author: Chien, Jen-Tzung (en_US)
dc.contributor.author: Ten Haken, Randall K. (en_US)
dc.contributor.author: El Naqa, Issam (en_US)
dc.date.accessioned: 2019-04-02T05:59:57Z
dc.date.available: 2019-04-02T05:59:57Z
dc.date.issued: 2017-12-01 (en_US)
dc.identifier.issn: 0094-2405 (en_US)
dc.identifier.uri: http://dx.doi.org/10.1002/mp.12625 (en_US)
dc.identifier.uri: http://hdl.handle.net/11536/147864
dc.description.abstract: Purpose: To investigate deep reinforcement learning (DRL) based on historical treatment plans for developing automated radiation adaptation protocols for non-small cell lung cancer (NSCLC) patients, with the aim of maximizing tumor local control at reduced rates of grade 2 radiation pneumonitis (RP2).
Methods: In a retrospective population of 114 NSCLC patients who received radiotherapy, a three-component neural network framework was developed for DRL of dose fractionation adaptation. Large-scale patient characteristics included clinical, genetic, and imaging radiomics features in addition to tumor and lung dosimetric variables. First, a generative adversarial network (GAN) was employed to learn the patient population characteristics necessary for DRL training from a relatively limited sample size. Second, a radiotherapy artificial environment (RAE) was reconstructed by a deep neural network (DNN) using both original and GAN-generated synthetic data to estimate the transition probabilities for adapting patients' personalized radiotherapy treatment courses. Third, a deep Q-network (DQN) was applied to the RAE to choose the optimal dose in a response-adapted treatment setting. This multicomponent reinforcement learning approach was benchmarked against real clinical decisions applied in an adaptive dose escalation clinical protocol, in which 34 patients were treated based on avid PET signal in the tumor and constrained by a 17.2% normal tissue complication probability (NTCP) limit for RP2. The uncomplicated cure probability (P+) was used as the baseline reward function in the DRL.
Results: Taking our adaptive dose escalation protocol as a blueprint for the proposed DRL (GAN + RAE + DQN) architecture, we obtained an automated dose adaptation estimate for use at approximately 2/3 of the way into the radiotherapy treatment course. By letting the DQN component freely control the estimated adaptive dose per fraction (ranging from 1 to 5 Gy), the DRL automatically favored dose escalation/de-escalation between 1.5 and 3.8 Gy, a range similar to that used in the clinical protocol. The same DQN yielded two patterns of dose escalation for the 34 test patients under different reward variants. First, using the baseline P+ reward function, the individual adaptive fraction doses of the DQN showed tendencies similar to the clinical data, with an RMSE = 0.76 Gy; however, the adaptations suggested by the DQN were generally lower in magnitude (less aggressive). Second, by adjusting the P+ reward function to place higher emphasis on mitigating local failure, better matching of doses between the DQN and the clinical protocol was achieved, with an RMSE = 0.5 Gy. Moreover, the decisions selected by the DQN appeared to have better concordance with patients' eventual outcomes. In comparison, the traditional temporal difference (TD) algorithm for reinforcement learning yielded an RMSE = 3.3 Gy due to numerical instabilities and insufficient learning.
Conclusion: We demonstrated that automated dose adaptation by DRL is a feasible and promising approach for achieving results similar to those chosen by clinicians. The process may require customization of the reward function if individual cases are to be considered. However, developing this framework into a fully credible autonomous system for clinical decision support would require further validation on larger multi-institutional datasets. (C) 2017 American Association of Physicists in Medicine. (en_US)
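The abstract's core idea — an agent choosing an adaptive dose per fraction from a 1–5 Gy grid so as to maximize a P+ (uncomplicated cure) reward — can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the authors' implementation: a single adaptation decision, a toy sigmoid TCP/NTCP surrogate in place of the learned radiotherapy artificial environment (RAE), and one-state tabular Q-learning standing in for the deep Q-network.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: one adaptation decision ~2/3 into the course,
# with a discretized dose-per-fraction grid of 1-5 Gy.
DOSES = np.arange(1.0, 5.01, 0.5)  # candidate adaptive doses (Gy)

def p_plus(dose, noise=0.02):
    """Toy uncomplicated-cure reward: tumor-control benefit rises with
    dose until an RP2-like complication term takes over. The sigmoid
    parameters are invented for illustration."""
    tcp = 1.0 / (1.0 + np.exp(-3.0 * (dose - 2.0)))
    ntcp = 1.0 / (1.0 + np.exp(-3.0 * (dose - 3.6)))
    return tcp - ntcp + rng.normal(0.0, noise)

# One-state Q-learning as a stand-in for the DQN: epsilon-greedy
# exploration over the dose grid, incremental value updates.
Q = np.zeros(len(DOSES))
alpha, eps = 0.1, 0.3
for _ in range(5000):
    a = rng.integers(len(DOSES)) if rng.random() < eps else int(np.argmax(Q))
    Q[a] += alpha * (p_plus(DOSES[a]) - Q[a])

best_dose = float(DOSES[np.argmax(Q)])
print(f"greedy adaptive dose: {best_dose:.1f} Gy")
```

Under this toy reward, the greedy dose settles inside the 1.5–3.8 Gy escalation/de-escalation band reported in the abstract; the full method instead learns state transitions from patient data (via the GAN-augmented RAE) and uses a neural Q-function over high-dimensional patient features.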
dc.language.iso: en_US (en_US)
dc.subject: adaptive radiotherapy (en_US)
dc.subject: deep learning (en_US)
dc.subject: lung cancer (en_US)
dc.subject: reinforcement learning (en_US)
dc.title: Deep reinforcement learning for automated radiation adaptation in lung cancer (en_US)
dc.type: Article (en_US)
dc.identifier.doi: 10.1002/mp.12625 (en_US)
dc.identifier.journal: MEDICAL PHYSICS (en_US)
dc.citation.volume: 44 (en_US)
dc.citation.spage: 6690 (en_US)
dc.citation.epage: 6705 (en_US)
dc.contributor.department: 電機工程學系 (zh_TW)
dc.contributor.department: Department of Electrical and Computer Engineering (en_US)
dc.identifier.wosnumber: WOS:000425379200055 (en_US)
dc.citation.woscount: 11 (en_US)
Appears in Collections: Journal Articles