Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 潘畊宇 | en_US |
dc.contributor.author | Pan, Gung-Yu | en_US |
dc.contributor.author | 周景揚 | en_US |
dc.contributor.author | 賴伯承 | en_US |
dc.contributor.author | Jou, Jing-Yang | en_US |
dc.contributor.author | Lai, Bo-Cheng | en_US |
dc.date.accessioned | 2015-11-26T00:57:11Z | - |
dc.date.available | 2015-11-26T00:57:11Z | - |
dc.date.issued | 2015 | en_US |
dc.identifier.uri | http://140.113.39.130/cdrfb3/record/nctu/#GT079711595 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/126976 | - |
dc.description.abstract | As smart devices of all kinds become ubiquitous in modern society, developing system-level power management for battery-powered embedded systems grows ever more important: dynamic voltage and frequency scaling (DVFS) slows down under-utilized processor cores to save power, while dynamic power management (DPM) automatically turns off idle components. Because overall system architectures keep growing in complexity, both efficiency and effectiveness must be considered with particular care when designing a power management policy. In addition, the policy must adapt to changes in the environment at any time and spare users from frequent manual tuning. The goal of this dissertation is to design a comprehensive yet lightweight power management policy that lets future multi-core smart devices operate efficiently; the algorithms adapt automatically to the environment and to user habits, handle increasingly complex information more effectively, and greatly reduce the power consumed by processors and peripheral components, achieving power saving, energy efficiency, convenience, and environmental friendliness. The policy is developed first for multi-core processors and is then extended to the whole smart-device system. As the number of processor cores keeps increasing, the exponentially expanding solution space makes the scalability of a power management policy critical. This dissertation proposes two highly scalable algorithms for multi-core processors. For DVFS, a provably optimal power-mode combination table is first constructed in pseudo-polynomial time, and the modes are then assigned to the individual cores in linear time. For DPM, a machine-learning engine is combined with a multi-level framework so that both the decision and the update steps finish in linearithmic time, and the convergence rate is further improved by eliminating redundant solution search space. Compared with state-of-the-art algorithms, the combinatorial optimization algorithm achieves better performance under any given power budget with up to 125X speedup, while the multi-level reinforcement learning algorithm runs 53% faster than previous methods and achieves 13.6% energy saving with only a 2.7% loss in system performance. The rich functionality of smart devices, however, places a heavy burden on the power manager: more inputs and outputs lead to a larger solution space, which raises the computational load and the convergence time. Since most smart devices are connected to the Internet, this dissertation proposes a method that uses cloud computing to improve the effectiveness of machine learning. The sophisticated learning engine is offloaded to the cloud to reduce the on-device burden, and training samples are shared among different devices to speed up learning. As a result, when one thousand devices of the same model share the cloud resources, the proposed method converges within only a few cycles. The method has also been implemented as an Android app, and the measured execution time accounts for merely 0.01% of the overall system time. The power management policy proposed in this dissertation is not limited to current systems; it can be realized on any future smart device that connects to the Internet and can be further extended to other settings, such as heterogeneous multi-core architectures, with additional concerns such as thermal and variability issues. In the near future, this framework can also be applied to the Internet of Things (IoT) to realize the vision of the smart home. | zh_TW |
dc.description.abstract | As smart devices become increasingly popular, developing system-level power management policies is crucial for battery-powered embedded systems: under-utilized processors are slowed down through dynamic voltage and frequency scaling (DVFS), and idle components are turned off through dynamic power management (DPM). Because of the growing number of components and the diverse input contexts, efficiency and effectiveness must be considered with particular care when designing power management policies for future smart devices. In addition, the power manager should adapt to the environment and act autonomously on behalf of the user. In this dissertation, a comprehensive power management policy is developed for future smart devices, such that the multiprocessors and components are energy-efficient while the power manager itself is autonomous and lightweight. The proposed policy focuses on multiprocessors first and then extends to the whole smart-device system. As the number of cores in a system increases, policy scalability becomes critical because the search space expands exponentially. Two highly scalable algorithms are proposed for multiprocessors. The DVFS-driven combinatorial algorithm first constructs an optimal mode-combination table in pseudo-polynomial time, and then assigns modes to cores with minimum transition cost in linear time. The DPM-driven learning engine exploits a multi-level paradigm to decide and update in linearithmic time, and raises the convergence rate by pruning redundant search space. Compared with state-of-the-art policies, the combinatorial optimization policy achieves better performance for any given power budget with up to 125X speedup, and the multi-level reinforcement learning policy runs 53% faster and achieves 13.6% energy saving with only a 2.7% latency penalty on average. The rich functionality of smart devices, however, burdens the power manager with more inputs and outputs and a larger search space. Since most smart devices are connected to the Internet, the sophisticated learning engine is offloaded to the cloud to reduce the on-device overhead, and training samples are shared among devices to accelerate learning. As a result, when one thousand devices of the same model are connected to the cloud, the proposed policy converges within a few iterations. Moreover, the measured overhead is only 0.01% of the system time when the policy is implemented as an Android app. The policy in this dissertation is not restricted to current systems; it can be applied to any future smart device connected to the Internet, with further considerations such as heterogeneous architectures and thermal and variation issues. Furthermore, the framework can be applied to the Internet of Things (IoT) and home automation in the near future. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | 電源管理 | zh_TW |
dc.subject | 機器學習 | zh_TW |
dc.subject | 雲端運算 | zh_TW |
dc.subject | 多核心系統 | zh_TW |
dc.subject | power management | en_US |
dc.subject | machine learning | en_US |
dc.subject | cloud computing | en_US |
dc.subject | multiprocessor systems | en_US |
dc.title | 在多核心智慧型裝置上結合雲端運算及機器學習演算法所實現的電源管理策略 | zh_TW |
dc.title | A Learning-on-Cloud Power Management Policy for Multiprocessor Smart Devices | en_US |
dc.type | Thesis | en_US |
dc.contributor.department | 電子工程學系 電子研究所 | zh_TW |
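
As background for the DVFS contribution summarized in the abstract above, the following Python sketch illustrates one way a power-mode combination table can be built in pseudo-polynomial time (a knapsack-style dynamic program over an integer power budget) and then mapped onto cores in linear time. The mode list, power and throughput numbers, budget, and the greedy reuse-current-mode assignment are hypothetical placeholders, not taken from the dissertation; this is a minimal sketch of the general idea, not the author's actual algorithm.

```python
# Illustrative sketch only -- hypothetical modes, budget, and objective;
# NOT the dissertation's actual algorithm or data.
from collections import Counter
from typing import List, Tuple

# Hypothetical per-core DVFS modes: (power cost in mW, relative throughput).
MODES: List[Tuple[int, int]] = [(100, 1), (250, 3), (600, 8)]

def build_combination_table(num_cores: int, power_budget: int) -> List[int]:
    """Knapsack-style DP over (cores decided, power spent) -> best throughput.
    Pseudo-polynomial: the table grows with the integer power budget."""
    NEG = float("-inf")
    best = [[NEG] * (power_budget + 1) for _ in range(num_cores + 1)]
    choice = [[-1] * (power_budget + 1) for _ in range(num_cores + 1)]
    best[0][0] = 0
    for c in range(1, num_cores + 1):
        for p in range(power_budget + 1):
            for m, (cost, perf) in enumerate(MODES):
                if p >= cost and best[c - 1][p - cost] != NEG:
                    cand = best[c - 1][p - cost] + perf
                    if cand > best[c][p]:
                        best[c][p], choice[c][p] = cand, m
    # Pick the best reachable power level, then back-track one mode per core.
    p = max(range(power_budget + 1), key=lambda q: best[num_cores][q])
    combo = []
    for c in range(num_cores, 0, -1):
        m = choice[c][p]
        combo.append(m)
        p -= MODES[m][0]
    return combo  # multiset of mode indices, one slot per core

def assign_to_cores(combo: List[int], current: List[int]) -> List[int]:
    """Linear-time assignment: keep a core in its current mode whenever that
    mode appears in the combination, to reduce transition cost."""
    pool = Counter(combo)
    assignment = [-1] * len(current)
    for i, mode in enumerate(current):        # pass 1: reuse current modes
        if pool[mode] > 0:
            assignment[i], pool[mode] = mode, pool[mode] - 1
    leftovers = list(pool.elements())
    for i, a in enumerate(assignment):        # pass 2: fill remaining cores
        if a == -1:
            assignment[i] = leftovers.pop()
    return assignment

if __name__ == "__main__":
    combo = build_combination_table(num_cores=4, power_budget=1200)
    print(assign_to_cores(combo, current=[0, 0, 2, 1]))
```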
Appears in Collections: | Thesis
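
Likewise, the learning-on-cloud idea in the abstract (an offloaded learning engine whose training samples are shared across same-model devices) can be pictured with a toy tabular Q-learning power manager in which many simulated devices update one shared table. The state encoding, timeout actions, reward weights, and hyper-parameters below are assumptions made purely for illustration; the dissertation's multi-level reinforcement learning engine and its cloud interface are considerably more sophisticated.

```python
# Illustrative sketch only -- a toy "cloud-shared" Q-learning DPM policy.
# All states, actions, rewards, and hyper-parameters are hypothetical.
import random
from collections import defaultdict

ACTIONS = [0, 50, 200]            # hypothetical sleep timeouts in ms (0 = sleep now)
ALPHA, GAMMA, EPSILON = 0.3, 0.9, 0.1

class SharedQTable:
    """Stands in for the cloud: many devices read and update one table,
    so training samples are effectively shared across devices."""
    def __init__(self):
        self.q = defaultdict(float)          # (state, action) -> value

    def best_action(self, state):
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        target = reward + GAMMA * max(self.q[(next_state, a)] for a in ACTIONS)
        self.q[(state, action)] += ALPHA * (target - self.q[(state, action)])

def choose_action(cloud: SharedQTable, state):
    """Epsilon-greedy decision made on the device; learning stays in the cloud."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return cloud.best_action(state)

def idle_reward(idle_ms, timeout_ms):
    """Toy reward: energy saved while asleep minus a penalty for sleeping
    just before the device is needed again."""
    slept = max(0, idle_ms - timeout_ms)
    woke_too_early = 1 if 0 < slept < 10 else 0
    return 0.01 * slept - 5.0 * woke_too_early

if __name__ == "__main__":
    cloud = SharedQTable()
    # Simulate 1000 devices, each contributing a handful of idle periods.
    for device in range(1000):
        state = "short_idle"
        for _ in range(5):
            idle_ms = random.choice([5, 30, 300])
            action = choose_action(cloud, state)
            next_state = "long_idle" if idle_ms > 100 else "short_idle"
            cloud.update(state, action, idle_reward(idle_ms, action), next_state)
            state = next_state
    print({a: round(cloud.q[("long_idle", a)], 2) for a in ACTIONS})
```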