Title: | High Definition View Synthesis Design with Object Boundary Refinement |
Authors: | 劉楷 Liu, Kai; 張添烜 Chang, Tian-Sheuan; Department of Electronics Engineering; Institute of Electronics |
Keywords: | Object boundary refinement; Virtual view synthesis; High definition view synthesis design; Object boundary noise |
Issue Date: | 2015 |
Abstract: | View synthesis is a technique that generates virtual views from a limited number of source views. The View Synthesis Reference Software (VSRS) developed by MPEG-FTV uses two input views and their corresponding depth maps to render a novel view; this depth image-based rendering (DIBR) approach is the common practice for view synthesis today. However, views synthesized under this framework still contain artifacts that disturb the viewing quality.
This thesis focuses on reducing and removing the artifacts that appear around object boundaries. Human eyes are sensitive to deformation and structural damage of objects, so when object boundaries are corrupted the viewer readily perceives the synthesized view as unnatural. Building on the VSRS configuration, we exploit the properties of different regions extracted from the source texture and depth to refine the object boundaries. By analyzing how the boundary artifacts arise, we propose a region directed boundary refinement (RDBR) method that handles the boundary artifacts caused by unreliable pixels and thereby improves the overall visual quality. Unlike previous methods that rely only on texture or depth edges or on hole information, the unreliable pixels around object boundaries are identified from combinations of regions derived from texture edges and from holes and occlusions in the depth map, and are then replaced with more reliable pixels taken from the same view or from the other view (illustrative sketches of the warping, refinement, and evaluation steps follow this record). In simulation, the proposed method achieves slightly better PSNR (0.8 dB) and SSIM than VSRS, together with superior perceptual visual quality. Furthermore, we implement the RDBR method in a UMC 90 nm process; the design consumes 199.82k gates and 79.28 kB of memory, and processes HD1080p video in real time at 200 MHz. |
URI: | http://140.113.39.130/cdrfb3/record/nctu/#GT070250197 http://hdl.handle.net/11536/127030 |
Appears in Collections: | Thesis |
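The abstract describes DIBR view synthesis from two source views and their depth maps, as in VSRS. The sketch below illustrates that warping idea only, not the VSRS or thesis implementation; it assumes rectified views, an 8-bit depth map, and a hypothetical linear depth-to-disparity mapping whose parameters `d_near`, `d_far`, and `alpha` are made up for illustration.

```python
# Minimal DIBR forward-warp sketch (illustrative only, not the VSRS code).
# Assumes rectified views, so depth reduces to a horizontal disparity.
import numpy as np

def warp_to_virtual(texture, depth, d_near=32.0, d_far=2.0, alpha=0.5):
    """Forward-warp one source view toward a virtual view.

    texture : (H, W, 3) uint8 source image
    depth   : (H, W) uint8 depth map (255 = nearest object)
    alpha   : assumed position of the virtual camera between the sources (0..1)
    Returns the warped texture and a boolean hole mask.
    """
    h, w = depth.shape
    # Hypothetical linear mapping from 8-bit depth to disparity in pixels.
    disparity = (d_far + (d_near - d_far) * depth.astype(np.float32) / 255.0) * alpha

    warped = np.zeros_like(texture)
    warped_depth = np.full((h, w), -1.0, dtype=np.float32)
    hole = np.ones((h, w), dtype=bool)

    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.round(xs - disparity).astype(np.int32)   # target column
    valid = (xt >= 0) & (xt < w)

    # Z-buffer: keep the pixel with the largest depth value (closest object).
    for y, x_src, x_dst, d in zip(ys[valid], xs[valid], xt[valid],
                                  depth[valid].astype(np.float32)):
        if d > warped_depth[y, x_dst]:
            warped_depth[y, x_dst] = d
            warped[y, x_dst] = texture[y, x_src]
            hole[y, x_dst] = False
    return warped, hole
```

The holes and disocclusions left by this warp are exactly the regions the abstract's boundary refinement step is concerned with.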
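The refinement described in the abstract flags unreliable pixels where texture edges coincide with holes or occlusions in the warped depth, then replaces them with more reliable pixels from the same or the other view. The sketch below is only one possible interpretation of that region logic, not the thesis' RDBR algorithm; the Sobel edge threshold, dilation radius, and replacement rule are assumed illustrative choices.

```python
# Sketch of unreliable-pixel detection and replacement around object
# boundaries, under the assumptions stated above.
import numpy as np
from scipy import ndimage

def refine_boundaries(view_a, view_b, hole_a, hole_b,
                      edge_thresh=30.0, radius=2):
    """Blend two warped views, replacing unreliable boundary pixels.

    view_a, view_b : (H, W, 3) warped textures from the two source views
    hole_a, hole_b : (H, W) boolean hole/occlusion masks from warping
    """
    gray = view_a.astype(np.float32).mean(axis=2)
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    texture_edge = np.hypot(gx, gy) > edge_thresh

    # Unreliable region: texture edges lying close to holes or occlusions.
    near_hole = ndimage.binary_dilation(hole_a, iterations=radius)
    unreliable = texture_edge & near_hole

    out = view_a.copy()
    # Prefer pixels from the other view wherever it is itself reliable.
    use_other = (unreliable | hole_a) & ~hole_b
    out[use_other] = view_b[use_other]
    return out
```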
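The reported comparison against VSRS uses PSNR and SSIM. A minimal way to compute those two metrics for a synthesized view against a ground-truth view captured at the virtual camera position, assuming scikit-image is available; the file names are placeholders.

```python
# Objective quality metrics used in the comparison: PSNR and SSIM.
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference   = io.imread("ground_truth_view.png")    # placeholder path
synthesized = io.imread("synthesized_view.png")     # placeholder path

psnr = peak_signal_noise_ratio(reference, synthesized, data_range=255)
ssim = structural_similarity(reference, synthesized,
                             channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```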