Title: GPU-Acceleration of the Hybrid Fluctuating Hydrodynamics and Molecular Dynamics Simulation
Authors: Pak, Thomas (白托馬)
Chung, Chung-Ping (鍾崇斌)
Chu, Jhih-Wei (朱智瑋)
EECS International Graduate Program
Keywords: GPU; hybrid model; molecular dynamics; fluctuating hydrodynamics
Issue Date: 2016
Abstract: Fluid properties at the molecular scale are often investigated using all-atom simulations, which provide the highest level of detail attainable using classical mechanics. The behavior of fluids at the macroscopic scale, on the other hand, is modeled by approximating the fluid as a continuous quantity and tracking its evolution with hydrodynamic equations. At the nanoscale, both of these modeling paradigms are necessary. A hybrid model combining molecular dynamics and fluctuating hydrodynamics has previously been designed for simulations of nanoscale fluids. It implements a novel coupling scheme that associates a collocating grid with each particle; the mapping of particle variables to field variables, and vice versa, is achieved through interpolation between the particle and field grids. However, the coupling algorithm has not yet been adapted for high-performance computing (HPC).

In recent years, graphics processing units (GPUs) have emerged as a competitive platform for scientific computing. Originally designed for computer graphics, the GPU architecture is optimized for computationally intensive tasks and high data throughput. These features make GPUs an attractive and cost-effective alternative to traditional HPC clusters. A GPU–CPU framework was therefore chosen as the HPC platform for the hybrid model.

This thesis presents the design and implementation of a GPU-accelerated simulation of the hybrid model. The objective was to reformulate the original CPU algorithms to expose massive concurrency, implement them on the GPU, and achieve the highest possible speedup. A novel GPU algorithm was designed for the coupling scheme that uses shared memory as a staging area to perform fast local interpolations. To maximize computational throughput, a two-stage thread mapping was employed with minimal additional memory overhead. By drastically increasing the computational efficiency of simulations, the spatial and temporal scales that can be explored with the hybrid model were greatly expanded.
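The coupling and staging ideas summarized in the abstract can be illustrated with a short CUDA sketch. This is a minimal illustration under stated assumptions, not the thesis implementation: the kernel name depositToGrid, the Particle struct, the tile size TILE, and the grid spacing h are all hypothetical, and nearest-node weighting stands in for the actual interpolation between particle and field grids. The sketch only shows the general pattern of staging local contributions in shared memory before committing them to the global field array.

#include <cuda_runtime.h>

#define TILE 8                       // grid nodes per block edge (assumed)
#define NODES (TILE * TILE * TILE)   // nodes staged in shared memory per block

struct Particle { float x, y, z, q; };   // position and a scalar quantity to deposit

// Hypothetical particle-to-grid deposit kernel: one thread block owns one
// TILE^3 sub-grid of the field and uses shared memory as its staging area.
__global__ void depositToGrid(const Particle* particles, int nParticles,
                              float* grid, int nx, int ny, int nz, float h)
{
    __shared__ float tile[NODES];

    // Threads map to grid nodes: clear the shared staging tile.
    for (int i = threadIdx.x; i < NODES; i += blockDim.x)
        tile[i] = 0.0f;
    __syncthreads();

    // Origin of this block's sub-grid in global grid coordinates.
    const int ox = blockIdx.x * TILE, oy = blockIdx.y * TILE, oz = blockIdx.z * TILE;

    // Threads map to particles: each deposits to its nearest node in the tile
    // (a stand-in for the particle-grid interpolation of the coupling scheme).
    for (int p = threadIdx.x; p < nParticles; p += blockDim.x) {
        int ix = (int)(particles[p].x / h) - ox;
        int iy = (int)(particles[p].y / h) - oy;
        int iz = (int)(particles[p].z / h) - oz;
        if (ix < 0 || iy < 0 || iz < 0 || ix >= TILE || iy >= TILE || iz >= TILE)
            continue;                  // particle lies outside this block's tile
        atomicAdd(&tile[(iz * TILE + iy) * TILE + ix], particles[p].q);
    }
    __syncthreads();

    // Threads map back to grid nodes: commit the staged tile to global memory.
    for (int i = threadIdx.x; i < NODES; i += blockDim.x) {
        int ix = i % TILE, iy = (i / TILE) % TILE, iz = i / (TILE * TILE);
        int gx = ox + ix, gy = oy + iy, gz = oz + iz;
        if (gx < nx && gy < ny && gz < nz)
            atomicAdd(&grid[(gz * ny + gy) * nx + gx], tile[i]);
    }
}

A launch such as depositToGrid<<<dim3((nx + TILE - 1) / TILE, (ny + TILE - 1) / TILE, (nz + TILE - 1) / TILE), 128>>>(...) would tile the field over thread blocks. The alternation between thread-to-particle and thread-to-node loops is only a loose analogue of the two-stage thread mapping described in the thesis.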
URI: http://etd.lib.nctu.edu.tw/cdrfb3/record/nctu/#GT070460818
http://hdl.handle.net/11536/139212
Appears in Collections: Thesis