Title: Study and Implement Direct Data Transfer for GPU as TCP/IP Offload Engine
Author: Peng, Mei-Chiao
Yuan, Shyan-Ming
College of Computer Science, Degree Program of Computer Science
Keywords: GPU; TCP/IP offload engine; GPUDirect
Date of Issue: 2013
Abstract: As GPU (Graphics Processing Unit) computing speed has grown, the floating-point throughput (GFLOPS) of GPUs has far surpassed that of CPUs, and GPUs are now used not only for graphics but also for many data-intensive general-purpose applications. With both GPU computing and Ethernet speeds increasing, we would like to hand Ethernet packet processing, traditionally performed by the CPU, over to the GPU to accelerate the system's network processing. On a PC, however, building a system in which the GPU processes Ethernet packets means that data from a third-party network interface card must first pass through system memory before it can reach the graphics card's memory, which adds latency to the transfer path. To accelerate network processing under this architecture, the GPU must obtain packet data from the network card more quickly. To raise the data-transfer bandwidth between the graphics card and third-party devices, current GPU products and related research provide libraries for direct DMA data transfer between the GPU and third-party devices, allowing the GPU to fetch data from a third-party device for computation, and to return the results to the device, without the former detour through system memory. This thesis uses NVIDIA's GPUDirect library to build an environment in which packet data is transferred directly between graphics card memory and a third-party network interface card, establishing a preliminary direct-transfer environment for GPU-based packet processing and evaluating its transfer performance.
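The two data paths contrasted in the abstract can be sketched as follows. This is a minimal illustrative sketch, not code from the thesis: `nic_recv_into()` is a hypothetical stand-in for the NIC driver delivering a packet into the supplied buffer, and `process_packet` is a placeholder kernel. In a real GPUDirect setup, the direct path additionally requires a CUDA-aware NIC driver and the GPUDirect kernel-side support, which have no single user-space API call.

```cuda
// Illustrative sketch: conventional packet path (NIC -> system memory ->
// GPU memory) versus a GPUDirect-style direct path (NIC -> GPU memory).
// nic_recv_into() is hypothetical; it represents the NIC driver DMA-ing
// a received packet into the given buffer.
#include <cuda_runtime.h>
#include <stddef.h>

#define PKT_SIZE 1500

extern void nic_recv_into(void *buf, size_t len);  // hypothetical driver hook

__global__ void process_packet(unsigned char *pkt, size_t len) {
    // Placeholder for per-packet work, e.g. checksum or protocol handling.
}

// Conventional path: the packet is staged through a pinned host buffer,
// costing an extra cudaMemcpy hop into device memory.
void recv_via_sysmem(unsigned char *d_pkt) {
    unsigned char *h_pkt;
    cudaMallocHost(&h_pkt, PKT_SIZE);           // pinned system memory
    nic_recv_into(h_pkt, PKT_SIZE);             // NIC DMAs into system memory
    cudaMemcpy(d_pkt, h_pkt, PKT_SIZE,
               cudaMemcpyHostToDevice);         // second copy: host -> device
    process_packet<<<1, 256>>>(d_pkt, PKT_SIZE);
    cudaFreeHost(h_pkt);
}

// GPUDirect-style path: the NIC writes device memory directly,
// eliminating the staging copy through system memory.
void recv_direct(unsigned char *d_pkt) {
    nic_recv_into(d_pkt, PKT_SIZE);             // direct DMA into GPU memory
    process_packet<<<1, 256>>>(d_pkt, PKT_SIZE);
}
```

The latency saving comes from removing the host-memory staging copy on every packet, which is exactly the transfer-path delay the thesis sets out to measure.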
URI: http://140.113.39.130/cdrfb3/record/nctu/#GT079879535
http://hdl.handle.net/11536/74067
Appears in Collections: Thesis