Title: Mezzo-Grained DRAM Cache with Main Memory Speculative Access Command
Author: Eric, Joel Rosales Reyes (周凱)
Advisor: Chen, Tien-Fu (陳添福)
Department: EECS International Graduate Program
Keywords: Main Memory; DRAM; DRAM Cache; Dual Granularity; Speculative Access
Date of Issue: 2016
Abstract:
With the rise of die-stacking technology, which has become the basis for large last-level caches, significant performance benefits have been delivered, but this trend has also introduced inefficiencies and expensive auxiliary hardware structures. One such structure is MissMap, which stores a vector of block-valid bits for each "page" in the DRAM cache [4]. While MissMap is a far more practical approach than a massive SRAM tag array [3], its implementation cost (e.g., 4MB for a 1GB DRAM cache) is too high for it to succeed in the market [22]. Another such structure is the Bi-Modal DRAM cache, which allows the cache system to employ two different granularities. While Bi-Modal offers flexibility in deciding which block size to use, its block predictor and way locators run constantly, incurring energy overhead, and the technique is further limited by the additional complexity of organizing data in a bi-modal manner [18]. We propose a DRAM cache that eliminates the large MissMap structure yet still provides prediction accuracy close to that of MissMap. At the same time, it decides which granularity to use without extra hardware, since the saturating counters in the predictors can also track the block size required at the moment. Instead of using small block sizes as Bi-Modal does, our system uses a mezzo granularity (256B-512B), reducing the tag overhead incurred by Bi-Modal. Moreover, to reduce both miss and hit latency, our system performs tag comparison and prediction in parallel, saving up to 15ns per request. Our predictor also contains a tags-column predictor that shortens the tag lookup. In addition, a speculative access to main memory keeps our predictor free of false negatives. Our evaluations reveal that the Mezzo Cache improves performance by 30% over the baseline model (Loh & Hill) thanks to the speculative memory-access scheme.
Overall, our design not only reduces area overhead compared to MissMap but also speeds up requests, and because prediction is engaged only once a page has reached its hit phase, it saves energy as well.
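The abstract's central mechanism, a saturating counter that both predicts DRAM-cache hits and selects the fetch granularity, can be illustrated with a minimal sketch. All class names, thresholds, and the dictionary-based table below are illustrative assumptions, not the thesis implementation:

```python
# Hypothetical sketch of the dual-purpose saturating counter described in the
# abstract: one 2-bit counter per cached page both predicts hit/miss and picks
# the fetch granularity (256B vs 512B mezzo blocks). Thresholds are assumed.

MEZZO_SMALL = 256   # bytes, assumed smaller mezzo granularity
MEZZO_LARGE = 512   # bytes, assumed larger mezzo granularity
COUNTER_MAX = 3     # 2-bit saturating counter: values 0..3

class PagePredictor:
    def __init__(self):
        self.counters = {}  # page address -> saturating counter value

    def update(self, page, hit):
        # Saturating increment on a hit, saturating decrement on a miss.
        c = self.counters.get(page, 0)
        self.counters[page] = min(c + 1, COUNTER_MAX) if hit else max(c - 1, 0)

    def predict_hit(self, page):
        # Predict a DRAM-cache hit once the page enters its "hit phase"
        # (assumed here to mean the counter's upper half).
        return self.counters.get(page, 0) >= 2

    def granularity(self, page):
        # Reuse the same counter to choose block size: pages with strong
        # reuse are fetched at the larger mezzo granularity.
        if self.counters.get(page, 0) == COUNTER_MAX:
            return MEZZO_LARGE
        return MEZZO_SMALL
```

A cold page predicts a miss and is fetched at the small granularity; after repeated hits the same counter saturates, flipping the prediction and the block size, so no separate block-size predictor is needed.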
URI: http://etd.lib.nctu.edu.tw/cdrfb3/record/nctu/#GT070360827
http://hdl.handle.net/11536/139305
Appears in Collections: Thesis