Title: | Bandwidth and Latency Aware Send Rate Allocation in P2P Streaming System (同儕網路串流系統中頻寬與延遲感知的傳輸速率分配機制)
Authors: | Huang, Ming-Hsiang (黃銘祥); Tseng, Chien-Chao (曾建超); Institute of Network Engineering
Keywords: | P2P streaming system; Mesh structure; Congestion control; Send rate allocation
Issue Date: | 2010
Abstract: | This thesis proposes a mechanism for mesh P2P streaming systems that allocates send rates according to each node's contribution to the system, with the goals of increasing every node's bandwidth utilization, reducing video transmission delay, and rewarding high-contribution nodes with high receive rates. In the proposed method, each provider first computes a receiver's contribution to the system from the receiver's upload bandwidth, the link latency from the provider to that receiver, and the delay of the content the receiver requests; it then adjusts the congestion control parameters for different receivers according to their contributions, so that receivers with larger contributions obtain more send rate from the provider when competing for bandwidth.
Previous studies have shown that, among the various architectures for P2P streaming, mesh-based systems perform best overall. However, mesh P2P systems rely only on random connections between nodes and do not consider how much each node can contribute to the system, which causes the following problems. First, nodes with large bandwidth do not necessarily receive more data, so they may not have enough data to forward and cannot fully utilize their bandwidth. Second, when data is not delivered first to nodes with large bandwidth or short link latency to the provider, the data takes longer paths and more time to spread across the system, which increases the playout delay. Third, newer data is not sent with higher priority, so the delay of the data propagating in the system grows. Fourth, nodes that upload more data do not necessarily download more data, resulting in an imbalance between contribution and reward.
Although some previous studies have attempted to solve these problems, they usually consider only a single criterion, for example deciding transmission priority solely by the receiver's bandwidth or by the delay of the requested data; no prior work considers receiver bandwidth, link latency, and data delay together. Moreover, earlier studies ignore the effect of the bandwidth between nodes on send rate allocation: a node simply measures its own bandwidth and allocates send rates to other nodes accordingly. In other words, they do not account for the fact that the bandwidth between different pairs of nodes may differ, and instead idealistically assume that every node's download bandwidth is sufficient to absorb data from any node, an assumption that does not hold in real networks.
This thesis therefore proposes a contribution-based bandwidth allocation mechanism for mesh P2P streaming systems. The mechanism consists of two parts: (1) a node contribution estimation method, which evaluates each node's contribution from its upload bandwidth, its link latency, and the delay of the data it requests; and (2) a congestion control mechanism, which probes the bandwidth between a provider and its receivers and ensures that, when the paths from a provider to two receivers share a congested link, the receiver with the larger contribution always competes for and obtains more send rate.
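The abstract does not give a closed-form expression for a node's contribution, only the three factors it combines. The following is a minimal sketch assuming a normalized weighted combination of those factors; the weights, normalization bounds, and the `NeighborInfo`/`estimate_contribution` names are illustrative assumptions, not the thesis's actual formula.

```python
# Hypothetical contribution estimator for a neighbor peer.
# Combines the three factors named in the thesis abstract: the neighbor's
# upload bandwidth, the link latency to it, and the delay of the content it
# requests. Weights and normalization bounds are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class NeighborInfo:
    upload_bw_kbps: float    # neighbor's reported upload bandwidth
    link_latency_ms: float   # measured provider-to-neighbor link latency
    content_delay_ms: float  # delay (age) of the content the neighbor requests


def estimate_contribution(info: NeighborInfo,
                          w_bw: float = 0.5,
                          w_lat: float = 0.3,
                          w_delay: float = 0.2,
                          max_bw_kbps: float = 10_000.0,
                          max_latency_ms: float = 500.0,
                          max_delay_ms: float = 10_000.0) -> float:
    """Return a contribution score in [0, 1]: higher upload bandwidth,
    lower link latency, and fresher requested content all raise the score."""
    bw_score = min(info.upload_bw_kbps / max_bw_kbps, 1.0)
    latency_score = 1.0 - min(info.link_latency_ms / max_latency_ms, 1.0)
    freshness_score = 1.0 - min(info.content_delay_ms / max_delay_ms, 1.0)
    return w_bw * bw_score + w_lat * latency_score + w_delay * freshness_score
```

A provider would recompute such a score for each neighbor whenever it receives updated neighbor information, and use the score to bias the rate allocation described below.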
To verify the proposed method, we ran simulations with the NS2 simulator and compared it with Coolstreaming and Prime; the experimental results show that the proposed method outperforms Coolstreaming and Prime in most environments.

In this thesis, we propose a send rate allocation mechanism for mesh P2P streaming systems that increases bandwidth utilization, reduces playout delays, and improves the rewards that nodes receive for their contributions to the system. The underlying idea is that a provider allocates its send rate to a receiver according to the receiver's contribution. A provider first estimates the contribution of each of its receivers from the receiver's upload bandwidth, the link delay to the receiver, and the delay of the contents requested by the receiver. The provider then adjusts the congestion control parameters of each receiver according to its contribution, and thus allocates higher send rates to receivers with larger contributions.

Previous research shows that mesh architectures outperform other architectures for P2P streaming. However, peers in previous mesh P2P streaming systems simply connect to and request data from one another at random, without considering how much a peer can contribute to the system. Neglecting peers' contributions may cause the following problems. First, peers with large bandwidth may not have enough data and thus cannot use their bandwidth to forward data to more peers. Second, providers do not send data first to peers with high bandwidths or low link delays, so it takes more time, and possibly longer paths, to disseminate data to all peers across the system. Third, providers do not send new data with higher priority, so more stale data circulates in the system. Fourth, peers that contribute more upload bandwidth do not receive comparable rewards.

Some researchers have tried to resolve some of these problems, but they consider either peer upload bandwidths or content delays, not both at the same time. Furthermore, the upload bandwidth they consider is the bandwidth measured by the provider alone, whereas the achievable bandwidth between a provider and a receiver depends on the routing path between the two peers, not just the provider's upload bandwidth. In addition, the link latency between a provider and a receiver also contributes to the playout delay. Therefore, in our design the contribution of a receiver depends on three factors: the upload bandwidth of the peer, the link latency between the receiver and its provider, and the delay of the content requested by the receiver.

The proposed send rate allocation mechanism contains two parts: a contribution estimation method and a congestion control mechanism. Each peer periodically executes the contribution estimation method to determine the contributions of its neighbors whenever it receives their information. A provider then uses the congestion control mechanism to allocate send rates to requesting peers in accordance with their contributions. Under this congestion control, a provider increases the send rate to a peer when it receives all acknowledgements from that peer and decreases the send rate when it detects packet losses; a peer with a larger contribution, however, increases more and decreases less than peers with smaller contributions.
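As a rough illustration of this weighted adaptation, the sketch below scales an AIMD-style increase and decrease by a peer's contribution score. The class name, step sizes, and scaling rule are assumptions made for illustration under the behavior the abstract describes, not the thesis's exact congestion control parameters.

```python
# Illustrative contribution-weighted rate adaptation for a single receiver.
# Following the abstract: the provider raises the send rate when a round of
# acknowledgements arrives intact and lowers it when it detects losses, with
# higher-contribution receivers increasing more and decreasing less.
# Step sizes and the scaling by contribution are illustrative assumptions.
class WeightedRateController:
    def __init__(self, contribution: float,
                 base_increase_kbps: float = 32.0,
                 base_decrease_factor: float = 0.5,
                 min_rate_kbps: float = 16.0):
        self.contribution = contribution      # contribution score in [0, 1]
        self.base_increase = base_increase_kbps
        self.base_decrease = base_decrease_factor
        self.min_rate_kbps = min_rate_kbps
        self.rate_kbps = min_rate_kbps        # current allocated send rate

    def on_all_acked(self) -> None:
        # Larger contribution -> larger additive increase per acked round.
        self.rate_kbps += self.base_increase * (1.0 + self.contribution)

    def on_loss_detected(self) -> None:
        # Larger contribution -> milder multiplicative decrease on loss.
        factor = min(self.base_decrease * (1.0 + self.contribution), 0.95)
        self.rate_kbps = max(self.rate_kbps * factor, self.min_rate_kbps)
```

Calling `on_all_acked()` after each fully acknowledged round and `on_loss_detected()` on packet loss gives a higher-contribution receiver a faster ramp-up and a gentler back-off than a lower-contribution competitor.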
As a consequence, the congestion control mechanism can probe the available bandwidth among peers, and when the paths from a provider to several peers share a congested link, the provider allocates more send rate to the peers with larger contributions. We conducted simulations with the NS2 simulator to evaluate the proposed send rate allocation. The simulation results show that the congestion control can indeed allocate more bandwidth to the peers that contribute more. Furthermore, we compared our allocation method with Coolstreaming and Prime in terms of average peer receive rates and source-to-peer transmission delays. The results show that, in most environments, peers in our proposal achieve higher receive rates and lower source-to-peer transmission delays than those in Coolstreaming and Prime. |
URI: | http://140.113.39.130/cdrfb3/record/nctu/#GT079756504 http://hdl.handle.net/11536/45994 |
Appears in Collections: | Thesis |