Title: The Study of Disparity Estimation Design for High Definition 3DTV Applications
Authors: Tseng, Yu-Cheng (曾宇晟)
Chang, Tian-Sheuan (張添烜)
Institute of Electronics
Keywords: Disparity estimation; 3DTV; VLSI design
Issue Date: 2011
Abstract:
With the emergence of 3DTV, viewers can enjoy a new visual experience from 3D videos, which are captured by stereo cameras and further processed by image-processing techniques for multi-view and free-viewpoint 3DTV applications. In 3D video processing, one of the most important techniques is disparity estimation, which generates the disparity maps used to synthesize virtual-view videos. The state-of-the-art disparity estimation algorithm proposed by the MPEG 3D Video Coding team delivers high-quality disparity maps, but it suffers from high computational complexity and low parallelism due to its graph-cut algorithm, especially for high-definition videos. To address these problems, this dissertation first proposes a baseline disparity estimation algorithm that adopts the belief propagation algorithm to increase the parallelism of disparity estimation, together with the joint bilateral upsampling algorithm to reduce the computational resolution. The resulting hardware design challenges are solved by the proposed architectural design methods. Based on the baseline algorithm, we further propose a high-quality algorithm that improves temporal consistency, handles occlusion, and delivers high-quality disparity maps. To accelerate the high-quality algorithm, we propose two fast algorithms for different implementation methods. For software implementation, the sparse-computation fast algorithm selects sparse pixels by spatial and temporal analysis and updates disparities only for those pixels, reducing the execution time to 62.9%. On the other hand, for VLSI implementation, the hardware-efficient fast algorithm reduces the execution time of the high-quality algorithm to 57.2% and, with the proposed cost-diffusion method, decreases the memory cost of belief propagation to 0.00029%. Objective evaluation results show that our disparity quality is close to that of the state-of-the-art algorithm for view-synthesis applications.
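Disparity estimation searches, for each pixel, over a range of disparity levels and keeps the best-matching one. The following is a minimal winner-take-all sketch over per-pixel absolute-difference costs, for illustration only; the dissertation's algorithms use belief propagation with joint bilateral upsampling, not this naive matcher:

```python
import numpy as np

def wta_disparity(left, right, max_disp):
    """Winner-take-all disparity from per-pixel absolute-difference costs.

    left, right: rectified grayscale images of shape (H, W).
    max_disp: number of disparity levels D to search.
    Returns an (H, W) integer disparity map.
    """
    H, W = left.shape
    # Cost volume (D, H, W); unreachable entries stay at +inf.
    cost = np.full((max_disp, H, W), np.inf)
    for d in range(max_disp):
        # Left pixel x matches right pixel x - d.
        cost[d, :, d:] = np.abs(left[:, d:].astype(float)
                                - right[:, :W - d].astype(float))
    # Winner-take-all: pick the disparity with the minimum cost per pixel.
    return cost.argmin(axis=0)
```

Real systems replace the per-pixel cost with an aggregated or message-passing-refined cost volume before the winner-take-all step, which is what makes belief propagation attractive for parallel hardware.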
Moreover, we further simplify the hardware-efficient algorithm and propose a high-throughput architectural design. The implementation results show that the proposed disparity estimation engine achieves a throughput of 95 frames/s for three-view HD1080p disparity maps with 128 disparity levels (i.e., 75.64G pixel-disparities/s), which satisfies the requirements of high-definition 3DTV applications.
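The quoted throughput figure follows directly from the stated parameters (assuming HD1080p denotes a 1920x1080 frame):

```python
# Throughput check: three HD1080p views, 128 disparity levels, 95 frames/s.
views, width, height, disparities, fps = 3, 1920, 1080, 128, 95
pixel_disparities_per_s = views * width * height * disparities * fps
print(round(pixel_disparities_per_s / 1e9, 2))  # → 75.64 (G pixel-disparities/s)
```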
URI: http://140.113.39.130/cdrfb3/record/nctu/#GT079511849
http://hdl.handle.net/11536/41072
Appears in Collections: Thesis


Files in This Item:

  1. 184901.pdf

For a zip file, download it, unzip it, and open index.html in the extracted folder with a web browser to view the full text.