Full metadata record
dc.contributor.author: 李柏穎 [en_US]
dc.contributor.author: Bo-Yin Lee [en_US]
dc.contributor.author: 莊榮宏 [en_US]
dc.contributor.author: Dr. Jung-Hong Chuang [en_US]
dc.date.accessioned: 2014-12-12T02:25:02Z [-]
dc.date.available: 2014-12-12T02:25:02Z [-]
dc.date.issued: 2000 [en_US]
dc.identifier.uri: http://140.113.39.130/cdrfb3/record/nctu/#NT890392042 [en_US]
dc.identifier.uri: http://hdl.handle.net/11536/66832 [-]
dc.description.abstract: 本論文結合3D幾何式與2D影像式顯像之優點,透過物體自身遮蔽誤差的估計,配合空間分割、多層次物體深度網格與快取影像、多層精細度顯像、影像曲化等技術,加上保守性的遮蔽性裁切與背向裁切,發展出一顯像時間能與場景複雜度較無關的虛擬實境瀏覽系統。整個系統分前置處理與執行兩階段。前置處理階段首先將整個場景作蜂巢式的空間分割成為正六邊形的瀏覽子空間,再對每一個瀏覽子空間外的物體估計物體遮蔽誤差。對於遮蔽誤差小的物體利用快取影像配合物體原始幾何資料產生該物體的深度網格。對於遮蔽誤差大的物體則事先選擇適當精細度的漸進式模型並執行背向裁切。並針對所有瀏覽子空間外的物體進行保守性的遮蔽性裁切。執行階段則是依據分類的結果顯示物體。先進行即時遮蔽性裁切,然後對於利用快取影像與深度網格的物體經過影像曲化與重新投影的運算,進行影像式顯像,而瀏覽子空間內的物體經過即時背向裁切後與瀏覽子空間外的物體都進行幾何式的顯像。透過這樣的顯像方法可以獲得比原有顯像畫面略差但是顯像速度快而平穩的瀏覽效果。 [zh_TW]
dc.description.abstract: In this thesis, we combine geometry-based and image-based rendering techniques to develop a VR navigation system whose rendering efficiency is relatively independent of scene complexity. The system has two phases. In the preprocessing phase, the x-y plane of the 3D scene is partitioned into equal-sized hexagonal cells, called navigation cells. For each navigation cell, we associate every object outside the cell with one of two representations according to its occlusion error. An object with error larger than a threshold is associated with a progressive LOD model of an appropriate level. An object with error smaller than the threshold is associated with a mesh reduced from the original mesh based on the silhouette and depth-feature pixels of its rendered image. The LOD mesh is further reduced by conservative back-face culling. All meshes are finally pruned by a conservative visibility operation that removes those occluded by other meshes when viewed from any point inside the cell. In the run-time phase, we perform occlusion culling for each view. Objects associated with a depth mesh and cached images are rendered using hardware-supported texture mapping. Back-facing polygons of objects inside the cell are first culled using pre-computed normal clusters, and the remaining geometry is rendered normally. Experiments show that the proposed method yields a much higher navigation frame rate with only a little quality loss. [en_US]
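The preprocessing described in the abstract begins by tiling the scene's x-y plane with equal-sized hexagonal navigation cells. The thesis record does not give a lookup formula, so the following is only a minimal sketch, assuming flat-top hexagons on an axial-coordinate grid (a common convention); the names `point_to_cell`, `axial_round`, and the `cell_size` parameter are illustrative, not taken from the thesis:

```python
import math

def axial_round(q: float, r: float) -> tuple[int, int]:
    """Round fractional axial hex coordinates to the nearest cell center."""
    x, z = q, r
    y = -x - z                      # cube coordinates satisfy x + y + z = 0
    rx, ry, rz = round(x), round(y), round(z)
    dx, dy, dz = abs(rx - x), abs(ry - y), abs(rz - z)
    # Repair the component with the largest rounding error so the sum stays 0.
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return int(rx), int(rz)

def point_to_cell(x: float, y: float, cell_size: float) -> tuple[int, int]:
    """Map a viewer position on the x-y plane to its hexagonal navigation cell.

    Assumes flat-top hexagons with circumradius `cell_size`
    (an illustrative assumption, not specified in the record).
    """
    q = (2.0 / 3.0 * x) / cell_size
    r = (-1.0 / 3.0 * x + math.sqrt(3.0) / 3.0 * y) / cell_size
    return axial_round(q, r)
```

Per-cell preprocessed data (depth meshes, cached images, LOD selections, visibility sets) would then be indexed by the `(q, r)` pair returned here.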
dc.language.iso: zh_TW [en_US]
dc.subject: 多層精細度模型 [zh_TW]
dc.subject: 快取影像 [zh_TW]
dc.subject: 物體深度網格 [zh_TW]
dc.subject: 物體遮蔽性誤差 [zh_TW]
dc.subject: 混合式顯像技術 [zh_TW]
dc.subject: 可見性裁切 [zh_TW]
dc.subject: 保守式背向裁切 [zh_TW]
dc.subject: 漸進式模型 [zh_TW]
dc.subject: Level-of-detail [en_US]
dc.subject: Image Cache [en_US]
dc.subject: Object Depth Mesh [en_US]
dc.subject: Object Occlusion Error [en_US]
dc.subject: Hybrid Rendering [en_US]
dc.subject: Visibility Culling [en_US]
dc.subject: Conservative Back-facing Culling [en_US]
dc.subject: Progressive Mesh [en_US]
dc.title: 利用空間分割個別物體之深度網格與遮蔽裁切的複雜場景顯像技術 [zh_TW]
dc.title: Rendering Complex Scenes Based on Spatial Subdivision, Object-Based Depth-Mesh and Occlusion Culling [en_US]
dc.type: Thesis [en_US]
dc.contributor.department: 資訊科學與工程研究所 [zh_TW]
Appears in Collections: Thesis