Full metadata record
DC Field | Value | Language
dc.contributor.author | Ling, Chih-Hung | en_US
dc.contributor.author | Lin, Chia-Wen | en_US
dc.contributor.author | Su, Chih-Wen | en_US
dc.contributor.author | Liao, Hong-Yuan Mark | en_US
dc.contributor.author | Chen, Yong-Sheng | en_US
dc.date.accessioned | 2014-12-08T15:20:30Z | -
dc.date.available | 2014-12-08T15:20:30Z | -
dc.date.issued | 2009-01-01 | en_US
dc.identifier.isbn | 978-1-4244-5653-6 | en_US
dc.identifier.issn |  | en_US
dc.identifier.uri | http://hdl.handle.net/11536/14600 | -
dc.description.abstract | This paper presents a novel framework for object-based video inpainting. To complete an occluded object, our method first samples a 3-D volume of the video into directional spatio-temporal slices, and then performs patch-based image inpainting to repair the partially damaged object trajectories in the 2-D slices. The completed slices are subsequently combined to obtain a sequence of virtual contours of the damaged object. The virtual contours and a posture sequence retrieval technique are then used to retrieve the most similar sequence of object postures in the available non-occluded postures. Key-posture selection and indexing are performed to reduce the complexity of posture sequence retrieval. We also propose a synthetic posture generation scheme that enriches the collection of key-postures so as to reduce the effect of insufficient key-postures. Our experimental results demonstrate that the proposed method can maintain the spatial consistency and temporal motion continuity of an object simultaneously. | en_US
dc.language.iso | en_US | en_US
dc.subject | video inpainting | en_US
dc.subject | object completion | en_US
dc.subject | posture mapping | en_US
dc.subject | synthetic posture | en_US
dc.title | VIDEO OBJECT INPAINTING USING POSTURE MAPPING | en_US
dc.type | Article | en_US
dc.identifier.journal | 2009 16TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-6 | en_US
dc.citation.volume |  | en_US
dc.citation.issue |  | en_US
dc.citation.spage | 2749 | en_US
dc.citation.epage | 2752 | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
Appears in Collections: Conference Papers
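The abstract above describes a multi-stage pipeline: sample the video volume into directional spatio-temporal slices, inpaint the damaged object trajectory in each 2-D slice, combine the completed slices into virtual contours, and retrieve the closest non-occluded posture for each contour. The following is a minimal illustrative sketch of that pipeline structure only; it is not the authors' implementation, and every function body and helper name below is a simplified placeholder introduced for illustration.

```python
# Illustrative sketch (assumption, not the authors' code) of the object-based
# video inpainting pipeline summarized in the abstract.
import numpy as np


def sample_spatiotemporal_slices(video: np.ndarray, num_slices: int) -> list:
    """Cut a T x H x W video volume into 2-D spatio-temporal slices.

    Simplified stand-in: vertical slices taken at evenly spaced columns.
    """
    cols = np.linspace(0, video.shape[2] - 1, num_slices, dtype=int)
    return [video[:, :, c] for c in cols]


def inpaint_slice(slice_2d: np.ndarray, mask_2d: np.ndarray) -> np.ndarray:
    """Repair the damaged object trajectory in one slice.

    Placeholder for patch-based image inpainting: masked pixels are filled
    with the mean of the known pixels.
    """
    out = slice_2d.astype(float).copy()
    if (~mask_2d).any():
        out[mask_2d] = out[~mask_2d].mean()
    return out


def virtual_contours_from_slices(slices: list) -> list:
    """Combine completed slices into a per-frame sequence of virtual contours.

    Placeholder: re-stack the slices and threshold each frame to a binary mask.
    """
    stack = np.stack(slices, axis=2)  # T x H x num_slices
    return [frame > frame.mean() for frame in stack]


def retrieve_posture_sequence(contours: list, key_postures: list) -> list:
    """Retrieve the most similar non-occluded key-posture for each contour.

    Placeholder similarity: L2 distance between binary masks of equal size.
    """
    retrieved = []
    for contour in contours:
        dists = [np.linalg.norm(contour.astype(float) - k.astype(float))
                 for k in key_postures]
        retrieved.append(key_postures[int(np.argmin(dists))])
    return retrieved


if __name__ == "__main__":
    # Tiny synthetic example: an 8-frame, 16x16 video with a damaged band.
    video = np.random.rand(8, 16, 16)              # T x H x W
    slice_mask = np.zeros((8, 16), dtype=bool)     # damaged region per slice
    slice_mask[:, 6:10] = True

    slices = sample_spatiotemporal_slices(video, num_slices=16)
    completed = [inpaint_slice(s, slice_mask) for s in slices]
    contours = virtual_contours_from_slices(completed)
    key_postures = [np.random.rand(16, 16) > 0.5 for _ in range(4)]  # stand-ins
    postures = retrieve_posture_sequence(contours, key_postures)
    print(f"Recovered {len(postures)} posture frames for the occluded object.")
```

The stub functions only mirror the data flow named in the abstract (slices in, contours out, postures retrieved); the paper's actual patch-based inpainting, key-posture indexing, and synthetic posture generation are not reproduced here.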