Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 張桓睿 | zh_TW |
dc.contributor.author | 杭學鳴 | zh_TW |
dc.contributor.author | Chang, Huan-Rui | en_US |
dc.contributor.author | Hang, Hsueh-Ming | en_US |
dc.date.accessioned | 2018-01-24T07:39:14Z | - |
dc.date.available | 2018-01-24T07:39:14Z | - |
dc.date.issued | 2017 | en_US |
dc.identifier.uri | http://etd.lib.nctu.edu.tw/cdrfb3/record/nctu/#GT070350303 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/140385 | - |
dc.description.abstract | 虛擬視影像合成是使用多台相機在同一時間點所拍攝到的彩色影像與深度圖來進行虛擬視點合成。傳統的虛擬視點合成系統一般使用兩台(以上)的相機排在同一水平位置上來拍攝場景,並且合成出相機基線上的虛擬視點影像。然而,在許多應用上我們可能想合成的虛擬視點影像是位於相機基線以外的部分,這種合成架構我們稱之為廣角虛擬視點合成。廣角虛擬視點合成需要解決合成結果有許多小的破洞以及大的遮蔽區。在本論文中,我們採用二乘二矩陣排列Kinect v2架構來設計我們的廣角虛擬視點合成系統。 在本論文中,我們的目標是實現一個可以運用在實際情況的廣角虛擬視點合成系統。主要有兩個貢獻,第一個貢獻是我們解決了四台Kinect v2之間無法完全同步拍攝的問題。由於Kinect v2並沒有提供可控制的外部同步訊號,因此四台Kinect v2彼此拍攝的時間點並不完全一致。我們使用時間校準軟體來解決四台Kinect v2彼此不同步拍攝的問題。本論文的第二個貢獻是我們提出了多視點圖融合演算法,這個方法可以濾除因為原始拍攝到的彩色影像與深度圖沒有對的很精準所造成的錯誤映射像素,同時也可有效改善了合成的影像品質。我們所提出的多視點圖融合演算法分成三個部分。第一個部分是針對不在深度邊緣周圍的像素使用主虛擬視點的彩色及深度像素來合成。第二個部分會解決紋理與深度未對準以及虛擬視點的紋理之間未對準所造成的雜訊。第三個部分會從四個映射的虛擬視點中選取最佳匹配的像素來合成剩餘的虛擬視點像素。最後,我們觀察虛擬視點相機在不同傾斜或縮放情況下的合成結果,本論文所提出的多視點圖融合演算法在實際情況下可以有好的主觀合成影像品質。 | zh_TW |
dc.description.abstract | Virtual view synthesis uses the color images and depth maps captured by multiple cameras to synthesize a virtual view image. A conventional view synthesis system typically arranges two or more cameras along a baseline and synthesizes virtual views that lie on that baseline. However, in many applications we may want to synthesize a virtual view beyond the baseline, which is called wide-angle view synthesis. Wide-angle view synthesis must cope with many small cracks and large disocclusion regions in the synthesized view. In this thesis, we adopt a two-by-two array of Kinect v2 cameras to design our wide-angle view synthesis system. Our goal is a wide-angle view synthesis system that works in practical, real-world conditions. This thesis makes two key contributions. The first is solving the synchronization problem among multiple Kinects during capture: because the Kinect v2 provides no external sync signal, the four Kinects do not capture the scene at exactly the same time instant, so we implement a clock-adjustment scheme based on PC clock synchronization software to solve this problem. The second contribution is a multi-view blending algorithm that removes the erroneous pixels caused by texture-depth misalignment and texture-texture misalignment and clearly improves the synthesized image quality. The proposed multi-view blending algorithm consists of three parts. The first part picks the dominant reference view to synthesize the corresponding virtual-image pixels. The second part filters out the noise caused by texture-depth misalignment and texture-texture misalignment. The third part chooses the best-matched color pixels from the four warped virtual views to fill the remaining holes. Finally, we examine the quality of the synthesized views under various camera tilt and zoom-in/out settings; the proposed multi-view blending algorithm achieves good subjective image quality in these real-world cases. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | 虛擬視影像合成 | zh_TW |
dc.subject | 遮蔽區 | zh_TW |
dc.subject | 視點融合 | zh_TW |
dc.subject | 深度圖 | zh_TW |
dc.subject | 背景 | zh_TW |
dc.subject | 映射 | zh_TW |
dc.subject | Virtual View Synthesis | en_US |
dc.subject | Disocclusion Regions | en_US |
dc.subject | View Merging | en_US |
dc.subject | Depth Map | en_US |
dc.subject | Background | en_US |
dc.subject | Warping | en_US |
dc.title | 二乘二矩陣排列Kinect v2的廣角虛擬視點影像合成 | zh_TW |
dc.title | Wide Angle Virtual View Synthesis Using Two-by-Two Matrix Kinect V2 | en_US |
dc.type | Thesis | en_US |
dc.contributor.department | 電子研究所 | zh_TW |
Appears in Collections: | Thesis
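The English abstract above outlines a three-part multi-view blending algorithm whose final step fills the remaining holes with the best-matched color pixels from the four warped virtual views. The following is a minimal illustrative sketch of that kind of hole-filling step, assuming per-view color, depth, and validity maps stored as NumPy arrays; the function name and the smallest-depth (foreground-first) selection rule are assumptions made for illustration, not the thesis's actual blending criterion.

```python
import numpy as np

def fill_holes_from_warped_views(base_color, base_valid,
                                 warped_colors, warped_depths, warped_valids):
    """Illustrative hole filling for a wide-angle view synthesis pipeline.

    For each pixel not covered by the dominant reference view, pick a color
    from whichever of the four warped views has a valid sample, preferring
    the candidate closest to the virtual camera (smallest depth).
    This is a simplified stand-in, not the thesis's actual algorithm.
    """
    out = base_color.copy()
    h, w, _ = base_color.shape
    for y in range(h):
        for x in range(w):
            if base_valid[y, x]:
                continue  # already synthesized from the dominant reference view
            best_depth = np.inf
            for color, depth, valid in zip(warped_colors, warped_depths, warped_valids):
                # Prefer the foreground-most valid candidate so that background
                # samples do not bleed through disocclusion regions.
                if valid[y, x] and depth[y, x] < best_depth:
                    best_depth = depth[y, x]
                    out[y, x] = color[y, x]
    return out
```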