Full metadata record
DC Field | Value | Language
dc.contributor.author | Chi, SA | en_US
dc.contributor.author | Shiu, RM | en_US
dc.contributor.author | Chiu, JC | en_US
dc.contributor.author | Chang, SE | en_US
dc.contributor.author | Chung, CP | en_US
dc.date.accessioned | 2014-12-08T15:27:30Z | -
dc.date.available | 2014-12-08T15:27:30Z | -
dc.date.issued | 1997 | en_US
dc.identifier.isbn | 0-8186-8227-2 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/19766 | -
dc.description.abstract | Instruction cache prefetching is a technique to reduce the penalty caused by instruction cache misses. Prefetching methods generally determine the target line to be prefetched based on the currently fetched line address. However, as the cache line becomes wider, there may be multiple branches in a cache line, which hinders the decision made by these methods. This paper develops a new instruction cache prefetching method in which the prefetch is directed by the prediction on branches. We call it branch instruction based (BIB) prefetching. In BIB prefetching, the prefetch information is recorded in an extended BTB. Simulation results show that BIB prefetching outperforms traditional sequential prefetching by 7% and other prediction-table-based prefetching methods by 17% on average. As BTB designs become more sophisticated and achieve higher hit and accuracy ratios, BIB prefetching can achieve higher performance. | en_US
dc.language.iso | en_US | en_US
dc.subject | instruction cache prefetching | en_US
dc.subject | branch target buffer | en_US
dc.subject | sequential prefetching | en_US
dc.subject | prediction table based prefetching | en_US
dc.title | Instruction cache prefetching with extended BTB | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 1997 INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS, PROCEEDINGS | en_US
dc.citation.spage | 360 | en_US
dc.citation.epage | 365 | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:000071190300052 | -
Appears in Collections: Conference Papers
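The abstract above describes prefetching directed by branch prediction, with prefetch information held in an extended BTB. A minimal sketch of that idea follows; it is an illustrative toy model, not the paper's design, and all names and constants (`ExtendedBTB`, `LINE_SIZE`, the line-granularity interface) are assumptions.

```python
# Toy model of branch-directed instruction prefetching: an extended BTB
# maps branch PCs to predicted targets, so when a cache line is fetched,
# the lines of its branches' predicted targets can be prefetched.
# LINE_SIZE and the whole interface are illustrative assumptions.
LINE_SIZE = 32  # bytes per instruction cache line (assumed)

def line_of(addr):
    """Index of the cache line containing an instruction address."""
    return addr // LINE_SIZE

class ExtendedBTB:
    """BTB extended with prefetch information (simplified)."""
    def __init__(self):
        self.entries = {}  # branch PC -> predicted target PC

    def update(self, branch_pc, target_pc):
        """Record a branch and its predicted target."""
        self.entries[branch_pc] = target_pc

    def prefetch_lines(self, fetched_line):
        """Lines to prefetch when `fetched_line` is fetched: the target
        line of each known branch inside that line; if the line holds no
        known branch, fall back to sequential (next-line) prefetch."""
        targets = {line_of(t) for pc, t in self.entries.items()
                   if line_of(pc) == fetched_line}
        return targets or {fetched_line + 1}

btb = ExtendedBTB()
btb.update(branch_pc=40, target_pc=200)   # branch in line 1, target in line 6
print(btb.prefetch_lines(1))  # target line of the branch at PC 40
print(btb.prefetch_lines(0))  # no known branch: sequential fallback
```

This captures the contrast the abstract draws: sequential prefetching always fetches the next line, while BIB prefetching lets a predicted-taken branch redirect the prefetch to the branch target's line.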