Title: Cache Miss Type Identification and Its Use in Dynamic Miss Address Prediction
(快取誤失類型辨認及其在動態預測快取誤失位置之用途)
Authors: 葉文涵
鍾崇斌
Institute of Computer Science and Engineering
Keywords: Cache Miss Type; Dynamic Miss Address Prediction
Issue Date: 2006
Abstract: Advances in semiconductor process and microarchitecture technology are driving microprocessor clock frequencies rapidly upward, while main memory access time has not improved at a comparable rate; cache memories are the only practical way to bridge this ever-widening gap, so cache misses have a major impact on system performance. Various cache-based architectural optimizations can reduce the miss penalty, but each typically handles only certain types of cache misses well. In this thesis, we improve an existing static-time cache miss type identification scheme by changing the pseudo-cache's replacement policy to finite look-ahead replacement, making the identification results more accurate. Exploiting the observation that different miss types exhibit different miss lengths and frequencies, we also propose two run-time identification approaches with low hardware cost and low complexity; they achieve more than 93% average accuracy relative to static-time identification, while being more accurate and requiring less hardware than previous related work.

We then demonstrate how cache miss type information can improve cache efficiency, using dynamic miss address prediction as an example: an optimization that is effective mainly for a particular miss type is triggered only when that miss type occurs. Finally, we combine several cache optimizations and place the cache blocks they predict in a shared sixteen-entry buffer, called the PV buffer, which in our experiments covers 89% of cache misses. Using the run-time identification results, we reduce the unnecessary memory-hierarchy traffic and fetch operations caused by combining multiple optimizations, improving the effectiveness of this cache-assist buffer.
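To make the miss-type classification concrete, the sketch below illustrates the classic compulsory/conflict/capacity distinction that pseudo-cache-based identification builds on: each access to a set-associative LRU cache is also checked against a fully associative LRU pseudo-cache of the same total capacity. This is a minimal illustration under assumed parameters (block size, set count, associativity, LRU replacement); it does not reproduce the thesis's finite look-ahead replacement or its run-time schemes based on miss lengths and frequencies.

# Minimal sketch (not the thesis's exact mechanism): classify each miss of a
# set-associative LRU cache as compulsory, conflict, or capacity by comparing
# it against a fully associative LRU "pseudo-cache" of the same total size.
# Block size, set count, and associativity are illustrative assumptions.
from collections import OrderedDict

BLOCK = 64     # bytes per cache block (assumed)
SETS = 128     # number of sets in the real cache (assumed)
WAYS = 4       # associativity of the real cache (assumed)

real = [OrderedDict() for _ in range(SETS)]   # per-set LRU order
pseudo = OrderedDict()                        # fully associative LRU, SETS*WAYS blocks
seen = set()                                  # blocks referenced at least once

def classify(addr):
    """Return 'hit', 'compulsory', 'conflict', or 'capacity' for one access."""
    blk = addr // BLOCK
    s = real[blk % SETS]

    hit = blk in s
    if hit:
        s.move_to_end(blk)                    # refresh LRU position
    else:
        s[blk] = True
        if len(s) > WAYS:
            s.popitem(last=False)             # evict LRU block of this set

    pseudo_hit = blk in pseudo
    if pseudo_hit:
        pseudo.move_to_end(blk)
    else:
        pseudo[blk] = True
        if len(pseudo) > SETS * WAYS:
            pseudo.popitem(last=False)        # evict globally LRU block

    first_ref = blk not in seen
    seen.add(blk)

    if hit:
        return "hit"
    if first_ref:
        return "compulsory"                   # never referenced before
    if pseudo_hit:
        return "conflict"                     # full associativity would have hit
    return "capacity"                         # even full associativity misses

# Example: stream through twice the cache capacity, then retouch block 0.
for a in range(0, 2 * SETS * WAYS * BLOCK, BLOCK):
    classify(a)
print(classify(0))                            # 'capacity': block 0 was evicted everywhere

In the thesis, per-miss type information of this kind decides which optimization is triggered for a given miss and which predicted blocks are worth placing in the shared PV buffer, which is how unnecessary traffic and fetch operations are avoided.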
URI: http://140.113.39.130/cdrfb3/record/nctu/#GT009317593
http://hdl.handle.net/11536/78804
Appears in Collections: Thesis


Files in This Item:

  1. 759301.pdf
  2. 759302.pdf
