Title: | A Synthesis-Quality-Oriented Depth Refinement Scheme for MPEG Free Viewpoint Television (FTV)
Authors: | Chen, Chun-Chi; Peng, Wen-Hsiao; Institute of Multimedia Engineering
Keywords: | Free View-point Television; Virtual View Synthesis; Depth Map Refinement; Depth Compression
Issue Date: | 2009
Abstract: | This thesis addresses the problem of refining depth information from the received reference and depth images within the MPEG FTV framework. An analytical model is first developed to approximate the per-pixel synthesis distortion (caused by depth-image compression) as a function of depth-error variances, intensity variations, ground-truth depth, and virtual camera locations. We then follow the model to detect unreliable depth pixels by inspecting intensity gradients, and refine their values with a candidate-based block disparity search. Additional side information is transmitted to make both operations robust against compression effects. Experimental results show that our scheme offers an average PSNR improvement of 1.2 dB over MPEG FTV and consistently outperforms state-of-the-art methods. Moreover, it removes synthesis artifacts to a great extent, producing results very close in appearance to the ground-truth view image.
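The abstract's core idea can be sketched in a few lines: a depth (disparity) error displaces a pixel during view warping, and the resulting synthesis error is, to first order, the horizontal intensity gradient times that displacement, so gradient inspection flags the pixels whose depth most needs refining. The sketch below is an illustrative approximation only; the function names, the 8-bit inverse-depth quantization convention, and all parameter values are assumptions for demonstration, not details taken from the thesis.

```python
import numpy as np

def position_error_std(sigma_d, focal, baseline, z_near, z_far):
    """Std. dev. of the horizontal warping error (in pixels) caused by a
    depth error of sigma_d quantization levels.

    Assumes the common DIBR convention of 8-bit quantized inverse depth,
    so one depth level spans (1/z_near - 1/z_far)/255 in inverse depth,
    and disparity is focal * baseline * (1/z)."""
    return focal * baseline * sigma_d * (1.0 / z_near - 1.0 / z_far) / 255.0

def estimate_synthesis_distortion(intensity, sigma_p):
    """First-order model: a warping-position error of sigma_p pixels turns a
    horizontal intensity gradient g into a synthesis error of about
    g * sigma_p, so per-pixel squared distortion ~ (g * sigma_p)**2."""
    grad_x = np.gradient(intensity.astype(np.float64), axis=1)
    return (grad_x * sigma_p) ** 2

def detect_unreliable(distortion, threshold):
    """Flag pixels whose predicted distortion exceeds the threshold; these
    would be handed to a refinement step such as the thesis's
    candidate-based block disparity search (not reproduced here)."""
    return distortion > threshold
```

Consistent with the model, flat regions tolerate large depth errors (zero gradient, hence zero predicted distortion), while pixels along intensity edges are flagged, which matches the abstract's use of intensity gradients to locate unreliable depth pixels.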
URI: | http://140.113.39.130/cdrfb3/record/nctu/#GT079657534 http://hdl.handle.net/11536/43540 |
Appears in Collections: | Thesis |