Full metadata record
DC Field: Value (Language)
dc.contributor.author: Chien, Chun-Liang (en_US)
dc.contributor.author: Lee, Tzu-Chin (en_US)
dc.contributor.author: Hang, Hsueh-Ming (en_US)
dc.date.accessioned: 2017-04-21T06:50:17Z
dc.date.available: 2017-04-21T06:50:17Z
dc.date.issued: 2016 (en_US)
dc.identifier.isbn: 978-1-5090-3313-3 (en_US)
dc.identifier.issn: 2161-2021 (en_US)
dc.identifier.uri: http://hdl.handle.net/11536/134322
dc.description.abstract: The view synthesis problem is to generate a virtual view from one or more given views and their associated depth maps. In this paper we adopt the depth image based rendering (DIBR) approach to synthesize the new views; no explicit 3D modeling is involved. Another component of this study is the popular commodity RGB-D (color plus depth) camera. The color and depth images captured by a pair of RGB-D cameras (Microsoft Kinect for Windows v2) are our inputs for synthesizing intermediate virtual views between the two cameras. Several methods, including depth-to-color warping, disocclusion filling, and color-to-color warping, are adopted and designed to achieve this goal. One of our major contributions is a new disocclusion detection algorithm that improves the disocclusion filling result. Furthermore, an improved camera calibration method is proposed to make use of the additional depth information. Good-quality synthesized views are shown at the end. (en_US)
dc.language.iso: en_US (en_US)
dc.subject: View synthesis (en_US)
dc.subject: camera calibration (en_US)
dc.subject: backward warping (en_US)
dc.subject: disocclusion filling (en_US)
dc.subject: depth map (en_US)
dc.subject: Kinect (en_US)
dc.title: VIRTUAL VIEW SYNTHESIS USING RGB-D CAMERAS (en_US)
dc.type: Proceedings Paper (en_US)
dc.identifier.journal: 2016 3DTV-CONFERENCE: THE TRUE VISION - CAPTURE, TRANSMISSION AND DISPLAY OF 3D VIDEO (3DTV-CON) (en_US)
dc.contributor.department: 電子工程學系及電子研究所 (zh_TW)
dc.contributor.department: Department of Electronics Engineering and Institute of Electronics (en_US)
dc.identifier.wosnumber: WOS:000390840500004 (en_US)
dc.citation.woscount: 0 (en_US)
Appears in Collections: Conference Papers
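The core DIBR step described in the abstract, warping a pixel from a reference view to a virtual view via its depth, can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the intrinsics, pose, and function name are assumed values for the example.

```python
import numpy as np

def warp_pixel(u, v, depth, K_ref, K_virt, R, t):
    """Warp one pixel from the reference view into the virtual view
    using its depth (the basic DIBR projection).  K_ref/K_virt are 3x3
    camera intrinsics, (R, t) the reference-to-virtual pose; all values
    here are hypothetical, for illustration."""
    # Back-project the pixel to a 3D point in the reference camera frame.
    p = depth * (np.linalg.inv(K_ref) @ np.array([u, v, 1.0]))
    # Transform into the virtual camera frame and re-project to the image.
    q = K_virt @ (R @ p + t)
    return q[0] / q[2], q[1] / q[2]

# Identity rotation with a small horizontal baseline: the warped pixel
# shifts by the classic disparity  f * baseline / depth.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
u2, v2 = warp_pixel(320, 240, 2.0, K, K, np.eye(3), np.array([0.1, 0.0, 0.0]))
# Here f=500, baseline=0.1, depth=2.0, so the horizontal shift is 25 pixels.
```

Pixels that no camera observes after this projection are the disocclusions the paper's detection-and-filling algorithm targets.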