Title: | Cloud Based Content Delivery Network |
Authors: | Chang, Chih-Wei; Yuan, Shyan-Ming; Institute of Computer Science and Engineering |
Keywords: | Cloud Computing; Cloud Storage; Content Delivery Network; Hadoop |
Issue Date: | 2010 |
Abstract: | To reduce maintenance costs, among other reasons, website developers often host their sites with virtual hosting providers. However, as web content and network traffic grow, it becomes difficult to provide good browsing service, and managing a large volume of content becomes complex and difficult. A web hosting system therefore needs vast storage space, high performance, high scalability, and low deployment cost to satisfy its users. Following the trend of cloud computing, distributed file systems, and distributed databases, more and more users migrate their website content to external cloud services, which both lowers maintenance costs and offers high availability. As large websites serve a global audience, how network nodes deployed around the world can cooperate with cloud storage services to give global users low-cost, real-time, high-performance content access is an issue that extends simple data storage into a content delivery network.
This thesis proposes a system architecture that combines cloud storage with a content delivery network. The architecture allows website services to be deployed at low cost and with high elasticity; it lets web developers manage page content conveniently and accelerates users' access to the site. The Hadoop Distributed File System (HDFS) serves as the cloud storage service, but its nodes are not confined to a single data center: they can be distributed across data centers worldwide, so a content provider's data is placed on the node closest to its users, and a content delivery network service then accelerates user access further.
In the past, the nodes of an HDFS cluster were usually deployed in the same data center; here they are spread across multiple data centers. To speed up file access and reduce the load on the master node, each node caches file locations, and the experimental results show that this improves user access speed. A CDN provider can also flexibly choose virtual hosting vendors, by price or by data-center location, to build a private CDN service. (English abstract:) As web content and Internet traffic increase, it becomes difficult to provide good service to end users, and managing large amounts of content becomes complex and difficult. A web hosting system therefore needs to provide a huge volume of data storage, high performance, high scalability, and low cost. Following the trend of cloud computing, distributed file systems, and distributed databases, more and more websites move their content to external cloud services such as Amazon S3 (Simple Storage Service) to reduce hardware and maintenance costs. On the other hand, large web applications are accessed from locations around the world. Providing a low-cost, real-time, and highly efficient content access method with the help of controlled nodes deployed in different locations is a new issue: a content delivery network rather than simple content access. This thesis proposes an architecture that combines cloud storage with a content delivery network. It offers a low-cost, elastic web-serving cloud platform that is easy for web content developers to manage and that accelerates users' access to the site. Using HDFS (Hadoop Distributed File System) as the cloud storage service, the nodes of HDFS can be built not only in one data center but also in multiple data centers around the world, so the content provider's content is placed on the node closest to the end users, and a content delivery network service further accelerates user access. Previously, the nodes of HDFS were usually built in the same data center; now the nodes are distributed across data centers to accelerate content access and to reduce the load on the master node. In addition, the system caches the locations of content on each node.
Experimental results show that this noticeably accelerates access to the website. |
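The caching scheme described in the abstract, where each node caches file locations so that repeated lookups need not reach the master node, can be sketched roughly as follows. This is a minimal illustration under assumed names: the `MasterNode`/`EdgeNode` classes, the `lookup`/`locate` methods, and the eviction policy are all assumptions for exposition, not the thesis's actual implementation.

```python
# Minimal sketch of node-local caching of file locations, assuming a
# master node (NameNode-like directory) that maps a file path to the
# data node holding it. All names and APIs here are illustrative.

class MasterNode:
    """Central directory: file path -> address of the node storing it."""
    def __init__(self):
        self.locations = {}
        self.lookups = 0  # how many lookups actually reached the master

    def register(self, path, node_addr):
        self.locations[path] = node_addr

    def lookup(self, path):
        self.lookups += 1
        return self.locations.get(path)


class EdgeNode:
    """A data-center node that caches file locations locally."""
    def __init__(self, master, cache_size=1024):
        self.master = master
        self.cache = {}  # path -> node address
        self.cache_size = cache_size

    def locate(self, path):
        # Serve from the local cache when possible; otherwise ask the
        # master once and remember the answer, reducing master load.
        if path in self.cache:
            return self.cache[path]
        addr = self.master.lookup(path)
        if addr is not None:
            if len(self.cache) >= self.cache_size:
                self.cache.pop(next(iter(self.cache)))  # naive eviction
            self.cache[path] = addr
        return addr


master = MasterNode()
master.register("/site/index.html", "node-taipei-1")
edge = EdgeNode(master)

edge.locate("/site/index.html")  # first access: forwarded to the master
edge.locate("/site/index.html")  # repeat access: served from the cache
print(master.lookups)            # only the first access hit the master
```

Under this scheme, repeated requests for popular content are resolved entirely within the edge node's data center, which is the load reduction on the master node that the abstract attributes to the experimental speed-up.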
URI: | http://140.113.39.130/cdrfb3/record/nctu/#GT079855534 http://hdl.handle.net/11536/48270 |
Appears in Collections: | Thesis |