Full metadata record
DC Field: Value [Language]
dc.contributor.author: Lu, Chin-Fu [en_US]
dc.contributor.author: Kuo, Hsien-Kai [en_US]
dc.contributor.author: Lai, Bo-Cheng Charles [en_US]
dc.date.accessioned: 2017-04-21T06:48:47Z
dc.date.available: 2017-04-21T06:48:47Z
dc.date.issued: 2016 [en_US]
dc.identifier.isbn: 978-1-5090-0987-9 [en_US]
dc.identifier.uri: http://dx.doi.org/10.1109/CISIS.2016.132 [en_US]
dc.identifier.uri: http://hdl.handle.net/11536/134661
dc.description.abstract: GPGPUs have been widely adopted as throughput processing platforms for modern big-data and cloud computing. Attaining a high-performance design on a GPGPU requires careful tradeoffs among various design concerns. Data reuse, cache contention, and thread-level parallelism have been demonstrated to be three imperative performance factors for a GPGPU. The correlated performance impacts of these factors pose non-trivial concerns when scheduling threads on GPGPUs. This paper proposes a three-stage scheduling scheme that co-schedules threads with consideration of all three factors. Experimental results on a set of irregular parallel applications demonstrate up to 70% execution time improvement over previous approaches. [en_US]
dc.language.iso: en_US [en_US]
dc.subject: GPGPU [en_US]
dc.subject: cache [en_US]
dc.subject: thread scheduling [en_US]
dc.subject: performance [en_US]
dc.title: Enhancing Data Reuse in Cache Contention Aware Thread Scheduling on GPGPU [en_US]
dc.type: Proceedings Paper [en_US]
dc.identifier.doi: 10.1109/CISIS.2016.132 [en_US]
dc.identifier.journal: PROCEEDINGS OF 2016 10TH INTERNATIONAL CONFERENCE ON COMPLEX, INTELLIGENT, AND SOFTWARE INTENSIVE SYSTEMS (CISIS) [en_US]
dc.citation.spage: 351 [en_US]
dc.citation.epage: 356 [en_US]
dc.contributor.department: Published under the name of National Chiao Tung University [zh_TW]
dc.contributor.department: National Chiao Tung University [en_US]
dc.identifier.wosnumber: WOS:000391528700053 [en_US]
dc.citation.woscount: 0 [en_US]
Appears in Collections: Conferences Paper