Title: Wide-Angle Virtual View Synthesis with Depth-Based Background Modeling
Authors: Liu, Yu-Lun
Hang, Hsueh-Ming
Department of Electronics Engineering, Institute of Electronics
Keywords: virtual view synthesis
Issue Date: 2013
Abstract:
The 3D virtual view synthesis system is becoming popular for future multimedia applications. Accordingly, the ITU/ISO MPEG joint international standards committee is currently developing standards for the RGB-D format. Generating (synthesizing) a high-quality picture from the given RGB and depth information is a critical component of a 3D virtual view system. Conventional virtual view synthesis mainly focuses on synthesizing a virtual view on the baseline aligned with two given views. In many multimedia applications such as video composition, however, we need to synthesize a new view beyond the baseline to achieve better subjective quality. But wide-angle synthesis (with camera tilt and zoom in/out) generally produces many artifacts, such as small cracks and large disocclusion regions. Our main goal is to create a wide-angle virtual view with good subjective quality. To accomplish this, four techniques are developed and presented in this thesis: a fast backward depth warping algorithm, backward texture warping with bicubic interpolation, artifact reduction techniques, and depth-based background modeling. The backward depth warping algorithm focuses on creating a warped depth map with nearly no quantization error. Starting from the target view, we substitute every possible target depth candidate into the point warping equation and keep the depth value that minimizes the difference (in coordinate and depth value) between the warped target view and the reference view. This yields a warped target-view depth map with small quantization error. Depth-based background modeling focuses on creating a global background model of a video sequence using the depth maps together with the RGB pictures. The first key concept is that near objects occlude the scene behind them; with the aid of depth information, we can identify the closer moving objects.
Second, we develop a recursive algorithm that iterates between the depth maps and the color pictures. Compared with existing schemes, the proposed method produces better-quality background images and improves the background depth map at the same time. All the above techniques have been tested on the MPEG test sequences. The results show visible subjective quality improvement in the synthesized virtual-view videos.
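The backward depth warping search described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: it assumes a rectified, horizontal-parallax camera pair so the point warping equation reduces to a per-level disparity, and it uses the common MPEG 8-bit depth-level-to-metric-depth convention. The function names and parameters (`f`, `baseline`, `z_near`, `z_far`) are illustrative assumptions.

```python
import numpy as np

def depth_from_level(v, z_near, z_far):
    # Assumed MPEG convention: 8-bit depth level v (0..255) -> metric depth.
    return 1.0 / (v / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)

def backward_depth_warp(ref_depth, f, baseline, z_near, z_far):
    """Backward depth warping sketch: for every target-view pixel, try all
    256 candidate depth levels, warp each into the reference view, and keep
    the level whose warped position best agrees with the reference depth
    map, avoiding the quantization error of forward warping."""
    h, w = ref_depth.shape
    target_depth = np.zeros_like(ref_depth)
    levels = np.arange(256)
    # Disparity for each candidate level under the rectified-pair assumption.
    disp = f * baseline / depth_from_level(levels, z_near, z_far)
    for y in range(h):
        for x in range(w):
            # Candidate reference-view coordinates for all 256 levels.
            xr = np.clip(np.rint(x - disp).astype(int), 0, w - 1)
            # Mismatch between each candidate level and the reference depth
            # found at its warped position; the minimizer wins.
            err = np.abs(ref_depth[y, xr].astype(int) - levels)
            target_depth[y, x] = levels[np.argmin(err)]
    return target_depth
```

For a constant-depth scene the search recovers the same level everywhere; in general, exhaustively testing the quantized candidates sidesteps the rounding holes that forward warping of the reference depth map would leave.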
URI: http://140.113.39.130/cdrfb3/record/nctu/#GT070150234
http://hdl.handle.net/11536/74965
Appears in Collections: Thesis