Full Metadata Record
DC Field: Value (Language)
dc.contributor.author: 徐彥睿 (en_US)
dc.contributor.author: Hsu, Yan-Ruey (en_US)
dc.contributor.author: 李鎮宜 (en_US)
dc.contributor.author: Lee, Chen-Yi (en_US)
dc.date.accessioned: 2015-11-26T00:57:11Z
dc.date.available: 2015-11-26T00:57:11Z
dc.date.issued: 2015 (en_US)
dc.identifier.uri: http://140.113.39.130/cdrfb3/record/nctu/#GT070250189 (en_US)
dc.identifier.uri: http://hdl.handle.net/11536/126973
dc.description.abstract: With the arrival of the big data era, machine learning algorithms for big data have been widely applied across many fields. Constrained by storage capacity and computing power, and with data volumes continuing to swell, processing machine learning algorithms on a single computing device such as a personal computer is no longer practical. This thesis proposes a parallel computing cluster built from embedded systems, running Hadoop on top of it as a solution for processing big data. In addition, through a hardware/software co-design methodology, an FPGA is used to further accelerate the computations executed in Hadoop. Several configurable basic operation modules are provided at the hardware level. Users can hand the required computation from the software level to the firmware, which coordinates at the firmware level to organize the required hardware modules to complete the calculation. This software/hardware pairing gives the computing cluster greater computing power and faster computation. In this thesis, a Mini-ITX development kit, a board carrying both a CPU and an FPGA, is adopted to build the Hadoop cluster. With the FPGA clocked at 120 MHz and the CPU at 667 MHz, running singular value decomposition on Hadoop with hardware computation yields roughly a 7.9% speedup over the case without hardware acceleration. If the hardware design were implemented as an application-specific integrated circuit (ASIC) running at a higher clock rate, the hardware acceleration approach would achieve even better results. (zh_TW)
dc.description.abstract: In the big data era, machine learning (ML) algorithms are widely adopted across many fields. As data sizes grow, processing machine learning algorithms on a single node becomes impractical due to the limits of storage capacity and computing power. This thesis proposes a parallel computing cluster constructed from embedded systems, on which Hadoop, a MapReduce-based framework, is deployed as a solution for handling big data. In addition, with a HW/SW co-design methodology, operations in Hadoop can be further accelerated by hardware on the FPGA. Several configurable basic operation modules are provided at the hardware level. A driver, acting as the bridge between software and hardware, receives the data to be processed from the software and configures the appropriate hardware modules to accomplish the task. Cooperating with the driver, users can define their own operations at the software level while execution is carried out by the hardware. The HW/SW co-design methodology not only provides greater computing capability but also achieves much higher operating speed. A Mini-ITX development kit, a board that combines a CPU and an FPGA, is adopted to construct the Hadoop cluster in this thesis. In a singular value decomposition case study, with the FPGA running at 120 MHz and the CPU at 667 MHz, the hardware acceleration approach gains about a 7.9% speedup compared to computing purely on the CPU. If the hardware were implemented with an ASIC design methodology, which allows execution at higher clock frequencies, the hardware acceleration approach could achieve considerably better performance. (en_US) [An illustrative code sketch of this offload pattern appears after the record below.]
dc.language.iso: zh_TW (en_US)
dc.subject: Big data (zh_TW)
dc.subject: Embedded system (zh_TW)
dc.subject: Hadoop computing cluster (zh_TW)
dc.subject: Machine learning (zh_TW)
dc.subject: Big data (en_US)
dc.subject: Machine learning (en_US)
dc.subject: Hadoop Cluster (en_US)
dc.subject: Embedded System (en_US)
dc.title: Design of an FPGA-Based Hadoop System for Big Data Analysis (zh_TW)
dc.title: Design of an FPGA-Based Hadoop System for Big Data Analysis (en_US)
dc.type: Thesis (en_US)
dc.contributor.department: Department of Electronics Engineering and Institute of Electronics (zh_TW)
Appears in Collections: Thesis
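
The abstracts above describe a driver that bridges Hadoop software with configurable basic operation modules on the FPGA. Below is a minimal Java sketch of that offload pattern inside a Hadoop map task. The Hadoop Mapper API is real; the FpgaOps facade, the dot-product operation, and the input line format are illustrative assumptions, since this record does not specify the thesis's actual driver interface.

```java
// Sketch only: FpgaOps is a hypothetical stand-in for the thesis's FPGA driver.
import java.io.IOException;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class DotProductMapper
    extends Mapper<LongWritable, Text, Text, DoubleWritable> {

  @Override
  protected void map(LongWritable key, Text value, Context ctx)
      throws IOException, InterruptedException {
    // Assumed input format: two equal-length vectors per line, "a1,a2,...;b1,b2,..."
    String[] halves = value.toString().split(";");
    double[] a = parse(halves[0]);
    double[] b = parse(halves[1]);

    // Offload the basic operation to the FPGA through the driver when present,
    // otherwise fall back to a pure-CPU loop so the job still runs everywhere.
    double dot = FpgaOps.isAvailable() ? FpgaOps.dot(a, b) : cpuDot(a, b);
    ctx.write(new Text("dot"), new DoubleWritable(dot));
  }

  private static double[] parse(String csv) {
    String[] parts = csv.trim().split(",");
    double[] v = new double[parts.length];
    for (int i = 0; i < parts.length; i++) v[i] = Double.parseDouble(parts[i]);
    return v;
  }

  private static double cpuDot(double[] a, double[] b) {
    double s = 0.0;
    for (int i = 0; i < a.length; i++) s += a[i] * b[i];
    return s;
  }
}

/** Hypothetical facade for the FPGA driver; the real interface is not given here. */
class FpgaOps {
  static boolean isAvailable() { return false; }   // stub: no device probing here
  static double dot(double[] a, double[] b) {      // would configure a hardware module
    throw new UnsupportedOperationException("native driver not modeled");
  }
}
```

In the thesis's design it is the driver, not user code, that selects and configures the appropriate hardware module; the CPU fallback here mirrors the pure-CPU baseline against which the 7.9% speedup is measured.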