Title: Scene Composition and Its Parallel Implementation
Authors: Hsieh, Wei-Lun
Hang, Hsueh-Ming
Institute of Electronics
Keywords: Scene composition; View synthesis; CUDA; Parallel computing structure
Issue Date: 2017
Abstract: Scene composition is widely used in movie and TV production. Typically, two video sequences of two different scenes are captured by two different cameras, which may move and rotate differently, so merging the two sets of 3D videos into one is a challenging task. Based on the scene composition algorithm developed in our laboratory over the past few years, we can produce a virtual view of rather good quality, but there is still room for improvement. In this thesis, we first integrate the previous work of our laboratory into a complete scene composition system. Second, we propose a brightness adjustment technique so that the brightness of the foreground matches that of the background, which makes the composition result look more natural. We then improve the efficiency of the view synthesis stage. One step crops out the regions that do not appear in the synthesized picture. The most significant part of our work is implementing the view synthesis task on a parallel processing architecture, NVIDIA CUDA. The original view synthesis method repeats similar operations on each individual pixel, so these per-pixel computations can be parallelized on an NVIDIA GPU (Graphics Processing Unit). However, the forward depth warping (FDW) step then faces a depth competition problem: multiple pixels of the reference view may be warped to the same pixel of the virtual view at the same time. To resolve this competition, the threads and block partition must be arranged properly in the implementation. With the aid of the GPU, we reduce the computation time of the view synthesis process by up to 70%.
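The brightness adjustment mentioned in the abstract could, in its simplest form, apply a global gain that matches the mean luminance of the foreground to that of the background. The CUDA kernel below is a minimal sketch of that idea only; the kernel name, the grayscale input, and the host-computed gain are illustrative assumptions, not the exact method of the thesis.

```cuda
#include <cuda_runtime.h>
#include <cstdint>

// Minimal brightness-matching sketch: scale every foreground luminance
// sample by a global gain (e.g. meanBackground / meanForeground computed
// on the host) and clamp the result to the 8-bit range.
__global__ void adjustBrightness(uint8_t* fgLuma, int n, float gain)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float v = fgLuma[i] * gain;                      // global gain (assumed)
    fgLuma[i] = (uint8_t)fminf(fmaxf(v, 0.0f), 255.0f);
}
```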
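Since the abstract only outlines how the per-pixel warping is mapped onto CUDA threads and how the depth competition is handled, the kernel below illustrates one common way to realize it: each thread warps one reference pixel, and a 64-bit atomicMin acts as a z-buffer so the pixel closest to the camera wins the race. All identifiers, the rectified-camera disparity model, and the "smaller stored depth value means closer" convention are assumptions for illustration; the thesis itself resolves the competition by arranging the thread blocks, which may differ from the atomic approach sketched here.

```cuda
#include <cuda_runtime.h>
#include <cstdint>

// Forward depth warping (FDW) sketch.  Each thread handles one reference
// pixel, computes its horizontal disparity (rectified cameras assumed) and
// claims a pixel in the virtual view.  Several threads may target the same
// virtual pixel, so the depth competition is resolved with a 64-bit
// atomicMin (compute capability 3.5+): the high 32 bits hold the depth
// (smaller = closer, by assumption), the low 32 bits hold the source pixel
// index, so depth and colour stay consistent.
__global__ void forwardDepthWarp(const uint16_t* refDepth,   // reference depth map
                                 unsigned long long* zbuf,   // packed (depth | srcIdx), pre-filled with ~0ULL
                                 int width, int height,
                                 float focal, float baseline)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int src = y * width + x;
    uint16_t d = refDepth[src];
    if (d == 0) return;                              // hole in the depth map

    // Disparity for rectified cameras: shift along the scan line only.
    int disp = __float2int_rn(focal * baseline / (float)d);
    int tx = x - disp;
    if (tx < 0 || tx >= width) return;

    int dst = y * width + tx;
    unsigned long long packed =
        ((unsigned long long)d << 32) | (unsigned int)src;
    atomicMin(&zbuf[dst], packed);                   // closest depth wins the race
}

// After the kernel, a second pass unpacks zbuf: every virtual pixel with
// zbuf != ~0ULL copies the colour of reference pixel (zbuf & 0xFFFFFFFF).
```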
URI: http://etd.lib.nctu.edu.tw/cdrfb3/record/nctu/#GT070450257
http://hdl.handle.net/11536/140865
Appears in Collections: Thesis