Title: Scalable Power Management Using Multilevel Reinforcement Learning for Multiprocessors
Authors: Pan, Gung-Yu
Jou, Jing-Yang
Lai, Bo-Cheng
Published under the name of: National Chiao Tung University
Keywords: Design;Algorithms;Performance;Management;Dynamic power management;multiprocessors;reinforcement learning
Issue Date: 1-Aug-2014
Abstract: Dynamic power management has become an imperative design factor for attaining energy efficiency in modern systems. Among various power management schemes, learning-based policies, which adapt to different environments and applications, have demonstrated superior performance over other approaches. However, they suffer from a scalability problem on multiprocessors as the number of cores in a system grows. In this article, we propose a scalable and effective online policy called MultiLevel Reinforcement Learning (MLRL). By exploiting a hierarchical paradigm, MLRL achieves O(n lg n) time complexity for n cores, and its convergence rate is greatly improved by compressing the redundant search space. Advanced techniques, such as function approximation and an action selection scheme, are included to enhance the generality and stability of the proposed policy. In simulations on the SPLASH-2 benchmarks, MLRL runs 53% faster and outperforms the state-of-the-art work with 13.6% energy saving and a 2.7% latency penalty on average. The generality and scalability of MLRL are further validated through extensive simulations.
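
For readers unfamiliar with the hierarchical idea, the following Python sketch shows what a tree-structured ("multilevel") learning power manager can look like: one small Q-learning agent per node of a binary tree over the cores, with each node refining its parent's decision so that a decision pass stays near-linear in the core count. The class names, the epsilon-greedy selection, the power modes, and the binary-tree layout are illustrative assumptions, not the authors' exact MLRL algorithm.

# Illustrative sketch only; not the MLRL algorithm from the article.
import random
from collections import defaultdict

class NodeAgent:
    """One tabular Q-learning agent per tree node; actions are coarse power modes."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)              # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select(self, state):
        # Epsilon-greedy action selection (the article uses its own selection scheme).
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error

def build_tree(num_cores, actions=("sleep", "low", "high")):
    """Build lg(n)+1 levels of agents over the cores (assumes a power-of-two core count).
    Roughly 2n-1 agents exist in total, in the spirit of the O(n lg n) complexity
    cited in the abstract."""
    levels, width = [], 1
    while width <= num_cores:
        levels.append([NodeAgent(list(actions)) for _ in range(width)])
        width *= 2
    return levels

def decide(levels, workload_state):
    """Top-down pass: each node picks an action conditioned on its parent's choice;
    the leaf-level choices map to per-core power states."""
    parent_choices = [None]
    for level in levels:
        parent_choices = [
            agent.select((workload_state, parent_choices[i // 2]))
            for i, agent in enumerate(level)
        ]
    return parent_choices

if __name__ == "__main__":
    levels = build_tree(num_cores=8)
    print(decide(levels, workload_state="bursty"))
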
URI: http://dx.doi.org/10.1145/2629486
http://hdl.handle.net/11536/25073
ISSN: 1084-4309
DOI: 10.1145/2629486
Journal: ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS
Volume: 19
Issue: 4
End Page: 
Appears in Collections:Articles


Files in This Item:

  1. 000341232600002.pdf
