Full metadata record
DC Field: Value [Language]
dc.contributor.author: 張鈞凱 [en_US]
dc.contributor.author: Chang, Chun-Kai [en_US]
dc.contributor.author: 杭學鳴 [en_US]
dc.contributor.author: Hang, Hsueh-Ming [en_US]
dc.date.accessioned: 2014-12-12T02:34:33Z [-]
dc.date.available: 2014-12-12T02:34:33Z [-]
dc.date.issued: 2012 [en_US]
dc.identifier.uri: http://140.113.39.130/cdrfb3/record/nctu/#GT070050206 [en_US]
dc.identifier.uri: http://hdl.handle.net/11536/72284 [-]
dc.description.abstract: 3D digital content is receiving growing attention. New technologies include free viewpoint television (FTV) and augmented reality (AR), and arbitrary-view synthesis is the key enabling technique behind these applications. Many arbitrary-view synthesis algorithms have been proposed; they typically use multiple images together with their corresponding depth maps to generate images at virtual viewpoints. We use this Depth Image-based Rendering (DIBR) approach to produce stereo video content with a substituted background. Given two sets of videos captured by two sets of multiple cameras, we combine these inputs to create a new stereo scene, composed of the foreground objects from one input set and the background from the other. To this end, we examine the mismatches between the two scenes; this thesis focuses on camera-parameter and camera-orientation mismatches. Once the user selects a landing point in the background image, we adjust the relevant camera parameters to synthesize the corresponding virtual view of the background scene (matching the foreground camera), achieving the background substitution. This approach greatly increases the freedom of composition. In contrast to conventional image composition, the above process requires depth and geometry information: the new background scene must be synthesized from the computed virtual camera parameters. Moreover, to maintain the inter-occlusion relationships among scene objects, depth competition during background substitution is another issue we address. When we extend from still images to video, camera-movement information is needed to compensate for the camera-motion mismatch between the two scenes. Experimental results show that visually satisfying quality can be achieved. [zh_TW]
dc.description.abstract: 3D video has been gaining popularity recently. In addition to the conventional left-right view 3D pictures, new forms of 3D video such as free viewpoint TV (FTV) and augmented reality (AR) have been introduced. Depth Image-based Rendering (DIBR) is one enabling rendering technique behind these applications. Typically, it uses multiple views with depth information to generate the intermediate view at any arbitrary viewpoint. We can use DIBR techniques to produce new stereo videos with background substitution. Given two sets of videos captured by two sets of multiple cameras, we would like to combine them to create a new stereo scene with the foreground objects from one set of videos and the background from the other set. We study several mismatch issues between the two scenes, such as the camera-parameter mismatch and camera-orientation mismatch problems, in this thesis. We propose a floor model to adjust the camera orientation. Once we pick the landing point (of the foreground) in the background scene, we need to adjust the background camera parameters (position, etc.) to match the foreground object, which enriches the freedom of composition. In contrast to conventional 2D composition methods, depth information is used in the above calculation. Thus, the new background scenes may have to be synthesized based on the calculated virtual camera parameters and the given background pictures. The depth competition problem is another issue in maintaining the inter-occlusion relationships in the composite new scene. If we extend this 3D composition from still pictures to motion pictures, we need the camera-movement information too. The camera motion is estimated for each scene individually to resolve the camera-motion mismatch between the two scenes. Plausible results are demonstrated using the proposed algorithms. [en_US]
dc.language.iso: en_US [en_US]
dc.subject: 影像合成 (image composition) [zh_TW]
dc.subject: 視角合成 (view synthesis) [zh_TW]
dc.subject: 分割 (segmentation) [zh_TW]
dc.subject: 不匹配 (mismatch) [zh_TW]
dc.subject: 深度競爭 (depth competition) [zh_TW]
dc.subject: 攝影機軌跡 (camera trajectory) [zh_TW]
dc.subject: image composition [en_US]
dc.subject: view synthesis [en_US]
dc.subject: segmentation [en_US]
dc.subject: mismatch [en_US]
dc.subject: depth competition [en_US]
dc.subject: camera motion [en_US]
dc.title: 基於虛擬視角的立體影片合成 (Virtual-view-based stereo video composition) [zh_TW]
dc.title: Virtual-view-based Stereo Video Composition [en_US]
dc.type: Thesis [en_US]
dc.contributor.department: 電子工程學系 電子研究所 (Department of Electronics Engineering and Institute of Electronics) [zh_TW]
Appears in Collections: Theses


Files in This Item:

  1. 020601.pdf
  2. 020602.pdf

If the file is a zip archive, download it, unzip it, and open index.html in the extracted folder with a web browser to view the full text.