Title: Resource Allocation in Multithreaded Multiprocessor Network Processors for Computationally Intensive and Memory-Access-Intensive Network Applications
Authors: Yi-Neng Lin (林義能); Ying-Dar Lin (林盈達)
Institute of Computer Science and Engineering
Keywords: network processor; resource allocation
Issue Date: 2006
Abstract: Networking applications today demand a hardware platform with strong computational and memory-access capabilities, as well as the ability to adapt efficiently to changes in protocols or product specifications. Neither of the ordinary options measures up: a general-purpose processor architecture is usually slowed down by kernel-user space communication and context switches, while an ASIC lacks flexibility and requires a long development period. This thesis examines (1) the feasibility of accelerating packet processing with the emerging alternative, network processors, which feature a multithreaded multiprocessor architecture, rich hardware resources, low context-switch overhead, and flexibility, and (2) how to allocate those resources when handling applications with different computational and memory-access requirements. We first survey network processors and categorize them into two types, coprocessor-centric and core-centric. In the former, the coprocessors handle the data-plane processing, whose load is usually much heavier than that of the control plane; in the latter, the core processor handles most of the packet processing, including the entire control plane and the bulk of the data plane. We then evaluate real implementations of a computationally intensive application on a coprocessor-centric platform and a memory-access-intensive application on a core-centric platform, aiming to identify the bottlenecks of the implementations as well as appropriate resource-allocation measures. Finally, based on these evaluations, analytical models are formulated and simulation environments are built to derive design implications for the two types of network processors.
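The abstract states that analytical models are formulated for the two architectures but does not reproduce them here. The sketch below is a minimal illustration, assuming the classic latency-hiding utilization formula for multithreaded engines; it is not the model from the thesis, and every function name and numeric value is hypothetical.

```python
# Minimal sketch (assumed, not taken from the thesis) of a standard latency-hiding
# model for a multithreaded multiprocessor: with n hardware threads per processing
# engine, a run length of r compute cycles between memory references, and a memory
# latency of l cycles, per-engine utilization saturates at min(1, n * r / (r + l)).

def engine_utilization(n_threads: int, run_length: float, mem_latency: float) -> float:
    """Fraction of cycles an engine does useful work when threads hide memory latency."""
    return min(1.0, n_threads * run_length / (run_length + mem_latency))

def throughput_pps(n_engines: int, n_threads: int, run_length: float,
                   mem_latency: float, clock_hz: float, cycles_per_packet: float) -> float:
    """Aggregate packets per second across all engines, given the per-packet compute cost."""
    util = engine_utilization(n_threads, run_length, mem_latency)
    return n_engines * util * clock_hz / cycles_per_packet

if __name__ == "__main__":
    # Computation-intensive application: long run lengths, memory latency easily hidden.
    print(throughput_pps(n_engines=8, n_threads=4, run_length=200,
                         mem_latency=100, clock_hz=600e6, cycles_per_packet=2000))
    # Memory-access-intensive application: short run lengths, more threads are needed
    # before the engines stay busy.
    print(throughput_pps(n_engines=8, n_threads=4, run_length=20,
                         mem_latency=100, clock_hz=600e6, cycles_per_packet=2000))
```

A back-of-envelope model of this kind makes the resource-allocation trade-off visible: for memory-access-intensive workloads, adding hardware threads raises utilization until the memory latency is fully hidden, whereas for computation-intensive workloads extra threads yield little and additional engines or coprocessors matter more.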
URI: http://140.113.39.130/cdrfb3/record/nctu/#GT009023818
http://hdl.handle.net/11536/82546
Appears in Collections: Thesis


Files in This Item:

  1. 381801.pdf
