Full metadata record
DC Field | Value | Language
dc.contributor.author | Hu, Jun-Hao | en_US
dc.contributor.author | Peng, Wen-Hsiao | en_US
dc.contributor.author | Chung, Chia-Hua | en_US
dc.date.accessioned | 2019-04-02T06:04:29Z | -
dc.date.available | 2019-04-02T06:04:29Z | -
dc.date.issued | 2018-01-01 | en_US
dc.identifier.issn | 0271-4302 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/150871 | -
dc.description.abstract | Reinforcement learning has proven effective for solving decision-making problems. However, its application to modern video codecs has yet to be seen. This paper presents an early attempt to introduce reinforcement learning to HEVC/H.265 intra-frame rate control. The task is to determine a quantization parameter value for every coding tree unit in a frame, with the objective of minimizing the frame-level distortion subject to a rate constraint. We draw an analogy between the rate control problem and the reinforcement learning problem by considering the texture complexity of coding tree units and the bit balance as the environment state, the quantization parameter value as an action that an agent needs to take, and the negative distortion of the coding tree unit as an immediate reward. We train a neural network based on Q-learning to be our agent, which observes the state to evaluate the reward for each possible action. Trained on only a limited set of sequences, the proposed model already performs comparably to the rate control algorithm in HM-16.15. | en_US
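The abstract maps rate control onto the standard reinforcement-learning triplet: state = (texture complexity, bit balance) of a coding tree unit, action = a QP value, reward = negative distortion. The toy sketch below illustrates that mapping with plain tabular Q-learning; the discretization, the QP candidates, the distortion model, and the bit-balance dynamics are all illustrative assumptions, not the paper's actual neural-network setup.

```python
# Toy tabular Q-learning over the state/action/reward mapping described
# in the abstract. Everything numeric here is an assumption for illustration.
import random

N_TEX, N_BAL = 4, 4          # discretized texture-complexity and bit-balance bins
QPS = [22, 27, 32, 37]       # candidate QP actions (common HEVC test QPs)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

# Q-table: Q[state][action index]
Q = {(t, b): [0.0] * len(QPS) for t in range(N_TEX) for b in range(N_BAL)}

def distortion(tex, qp):
    """Stand-in distortion model: more texture and higher QP -> more distortion."""
    return (tex + 1) * qp * 0.1

def step(state, a_idx):
    """Apply a QP to one CTU; return reward (negative distortion) and next state."""
    tex, bal = state
    reward = -distortion(tex, QPS[a_idx])
    # Toy dynamics: a lower QP spends more bits, draining the bit balance.
    bal = max(0, min(N_BAL - 1, bal - (len(QPS) - 1 - a_idx) + 1))
    return reward, (random.randrange(N_TEX), bal)

def train(episodes=2000, ctus_per_frame=8):
    random.seed(0)
    for _ in range(episodes):
        state = (random.randrange(N_TEX), N_BAL - 1)  # start with a full bit budget
        for _ in range(ctus_per_frame):
            # epsilon-greedy action selection
            a = (random.randrange(len(QPS)) if random.random() < EPS
                 else max(range(len(QPS)), key=lambda i: Q[state][i]))
            r, nxt = step(state, a)
            # standard Q-learning update
            Q[state][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[state][a])
            state = nxt
```

The paper replaces the table with a neural network that scores every action given the observed state, but the update rule and the state/action/reward roles are the same.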
dc.language.iso | en_US | en_US
dc.title | Reinforcement Learning for HEVC/H.265 Intra-Frame Rate Control | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2018 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS) | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:000451218703041 | en_US
dc.citation.woscount | 1 | en_US
Appears in Collections: Conferences Paper