Full metadata record
DC Field | Value | Language
dc.contributor.author | Ho, Yung-Han | en_US
dc.contributor.author | Cho, Chuan-Yuan | en_US
dc.contributor.author | Peng, Wen-Hsiao | en_US
dc.date.accessioned | 2020-05-05T00:01:59Z | -
dc.date.available | 2020-05-05T00:01:59Z | -
dc.date.issued | 2019-01-01 | en_US
dc.identifier.isbn | 978-1-5386-6249-6 | en_US
dc.identifier.issn | 1522-4880 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/154040 | -
dc.description.abstract | This paper introduces a hybrid video prediction scheme that combines the classic parametric overlapped block motion compensation (POBMC) technique with neural networks. Most learning-based video prediction methods rely on a black-box-like model for either direct generation of future video frames or estimation of a dense motion field. The model complexity often increases drastically with frame resolution. Departing from pure black-box approaches, this paper leverages the theoretically-grounded POBMC in a reinforcement learning framework to estimate a sparse motion field for future frame warping. Two neural networks are trained to identify critical points in the motion field for motion estimation. We train our model on 10k unlabeled frames in the KITTI dataset and achieve a state-of-the-art SSIM score of 0.923 on CaltechPed and an average SSIM score of 0.856 on Common Intermediate Format (CIF) standard sequences. | en_US
dc.language.iso | en_US | en_US
dc.subject | Reinforcement learning | en_US
dc.subject | deep video prediction | en_US
dc.title | DEEP REINFORCEMENT LEARNING FOR VIDEO PREDICTION | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) | en_US
dc.citation.spage | 604 | en_US
dc.citation.epage | 608 | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:000521828600120 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conference Papers