Title: Efficient Dynamic Clouds Rendering
Authors: Horng-Shyang Liao; Jung-Hong Chuang; Cheng-Chung Lin
Institute of Computer Science and Engineering
Keywords: Dynamic Clouds Rendering; Nature Phenomena; Volume Rendering; Image-Based Rendering; Cloud Simulation
Date of Issue: 2001
Abstract: Building on the cloud simulation method and two-pass rendering framework of [DKY+00], this thesis proposes a more efficient rendering method for 3D dynamic clouds and extends the simulation rules so that the method can be applied to virtual-reality scenes. On the simulation side, we add a cloud-extinction factor and upper/lower bounds on total cloud and humidity, and modify the cloud growth and extinction rules so that changes are confined to the boundaries of large cloud masses. These empirical rules let the user set the weather conditions before the simulation starts; no further control is needed while the simulation runs. On the rendering side, we propose a simplified lighting model that splits cloud rendering into two passes, shadow computation and final imaging, and identify what each pass can precompute: a Shadow Relation Table (SRT) and a Metaball Lighting Texture Database (MLTDB). The SRT identifies the regions of space whose shadows affect one another; the MLTDB stores images of the basic cloud primitive, the metaball, at various angles between the light and eye vectors and at various densities. Together they reduce the run-time lighting computation and substantially speed up cloud rendering. We also impose an octree hierarchy on the simulation space, which makes back-to-front traversal, view-frustum culling, hierarchical level-of-detail texture selection, and per-node texture caching straightforward; integrating these techniques further improves rendering speed at a small cost in image quality. Finally, considering the simulation and rendering system as a whole, we place simulation and shadow computation in one thread and final imaging in another. Because simulation and shadow computation are physically time-dependent, this decouples the simulation from the rendering frame rate and spreads their cost across frames.
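The texture-database idea described in the abstract can be sketched as a precomputed lookup table. This is a minimal illustration, not the thesis's implementation: the function and parameter names (`build_mltdb`, `lookup`, `N_ANGLE_BINS`, `N_DENSITY_BINS`) and the bucket counts are hypothetical, and a real system would store billboard textures rather than the stand-in values used here.

```python
import math

# Hypothetical sketch of a Metaball Lighting Texture Database (MLTDB):
# metaball images are precomputed off-line, indexed by the quantized angle
# between the light and eye vectors and by the quantized density, so that
# run-time rendering is a table lookup instead of per-frame lighting math.

N_ANGLE_BINS = 16   # assumed: 16 angle buckets over [0, pi]
N_DENSITY_BINS = 8  # assumed: 8 density buckets over [0, 1]

def build_mltdb(render_metaball):
    """Precompute one image per (angle bin, density bin) pair."""
    db = {}
    for a in range(N_ANGLE_BINS):
        for d in range(N_DENSITY_BINS):
            angle = (a + 0.5) * math.pi / N_ANGLE_BINS       # bin center
            density = (d + 0.5) / N_DENSITY_BINS             # bin center
            db[(a, d)] = render_metaball(angle, density)
    return db

def lookup(db, light_eye_angle, density):
    """Run-time lookup: quantize the inputs and fetch the stored image."""
    a = min(int(light_eye_angle / math.pi * N_ANGLE_BINS), N_ANGLE_BINS - 1)
    d = min(int(density * N_DENSITY_BINS), N_DENSITY_BINS - 1)
    return db[(a, d)]

# Stand-in renderer: records the (angle, density) it was called with.
db = build_mltdb(lambda angle, density: (angle, density))
```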
With the proposed framework, we obtain an efficient, dynamic, and near-photorealistic cloud simulation and rendering system.
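The two-thread split described in the abstract can be sketched with a producer/consumer pair: one thread runs simulation plus shadow (first-pass) computation, the other consumes the latest result for final imaging, so simulation timing is not coupled to rendering speed. This is a hedged toy sketch, not the thesis's system: the frame payloads are stand-in dictionaries, and the latest-frame-only queue policy is an assumption about how the decoupling might be realized.

```python
import queue
import threading

# Bounded queue holding only the newest simulation result; stale frames
# are dropped so the simulation thread never waits on a slow renderer.
frames = queue.Queue(maxsize=1)

def simulate_and_shadow(n_steps):
    """Producer: cloud simulation + shadow computation (first pass)."""
    for step in range(n_steps):
        shadowed_cloud = {"step": step}   # stand-in for voxels + shadows
        try:
            frames.get_nowait()           # drop a stale, unrendered frame
        except queue.Empty:
            pass
        frames.put(shadowed_cloud)
    frames.put(None)                      # sentinel: simulation finished

def render_loop(rendered):
    """Consumer: final-image rendering (second pass)."""
    while True:
        frame = frames.get()
        if frame is None:
            break
        rendered.append(frame["step"])    # stand-in for drawing the frame

rendered = []
sim = threading.Thread(target=simulate_and_shadow, args=(5,))
ren = threading.Thread(target=render_loop, args=(rendered,))
sim.start()
ren.start()
sim.join()
ren.join()
```

Because stale frames may be dropped, the renderer sees a strictly increasing subsequence of simulation steps rather than necessarily every step; the final step is always rendered, since the sentinel is enqueued behind it.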
URI: http://140.113.39.130/cdrfb3/record/nctu/#NT900392031
http://hdl.handle.net/11536/68444
Appears in Collections: Thesis