Full metadata record
DC Field | Value | Language
dc.contributor.author | Huang, Yu-Hao | en_US
dc.contributor.author | Tseng, Ying-Yu | en_US
dc.contributor.author | Kuo, Hsien-Kai | en_US
dc.contributor.author | Yen, Ta-Kan | en_US
dc.contributor.author | Lai, Bo-Cheng Charles | en_US
dc.date.accessioned | 2015-12-02T03:00:59Z | -
dc.date.available | 2015-12-02T03:00:59Z | -
dc.date.issued | 2013-01-01 | en_US
dc.identifier.isbn | 978-1-4799-2418-9 | en_US
dc.identifier.issn |  | en_US
dc.identifier.uri | http://dx.doi.org/10.1109/PDCAT.2013.46 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/128640 | -
dc.description.abstract | Modern GPGPUs implement an on-chip shared cache to better exploit the data reuse of various general-purpose applications. Given the massive number of concurrent threads on a GPGPU, striking a balance between data locality and load balance has become a critical design concern. To achieve the best performance, these two factors must be traded off jointly. This paper proposes a dynamic thread scheduler that co-optimizes data locality and load balance on a GPGPU. The proposed approach is evaluated on three applications with various input datasets. The results show that it reduces the overall execution cycles by up to 16% compared with approaches that consider only one of the two objectives. | en_US
dc.language.iso | en_US | en_US
dc.title | A Locality-Aware Dynamic Thread Scheduler for GPGPUs | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.doi | 10.1109/PDCAT.2013.46 | en_US
dc.identifier.journal | 2013 INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED COMPUTING, APPLICATIONS AND TECHNOLOGIES (PDCAT) | en_US
dc.citation.spage | 254 | en_US
dc.citation.epage | 258 | en_US
dc.contributor.department | 電子工程學系及電子研究所 | zh_TW
dc.contributor.department | Department of Electronics Engineering and Institute of Electronics | en_US
dc.identifier.wosnumber | WOS:000361018500040 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conference Papers
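The abstract above describes a scheduler that weighs data locality against load balance when assigning work. The paper itself is not reproduced here, so the following is only a minimal illustrative sketch of that general trade-off, not the authors' method: a greedy host-side scheduler that scores each candidate core by a weighted mix of a locality term (the task's data tile is already in that core's small cache model) and a load-balance term (queue length). The task format, the LRU cache model, and the weights are all assumptions made for the example.

```python
# Illustrative sketch only: greedy assignment that trades off data locality
# against load balance. Not the scheduler proposed in the paper; the task
# representation, cache model, and weights are hypothetical.
from collections import deque


def schedule(tasks, num_cores, cache_lines=4, w_locality=1.0, w_balance=1.0):
    """tasks: list of (task_id, tile_id) pairs; returns a list of per-core task queues."""
    queues = [[] for _ in range(num_cores)]
    caches = [deque(maxlen=cache_lines) for _ in range(num_cores)]  # per-core LRU of tile ids

    for task_id, tile in tasks:
        best_core, best_score = 0, float("-inf")
        for c in range(num_cores):
            locality = 1.0 if tile in caches[c] else 0.0   # reuse bonus if tile is resident
            balance = -len(queues[c])                      # penalize cores with long queues
            score = w_locality * locality + w_balance * balance
            if score > best_score:
                best_core, best_score = c, score
        queues[best_core].append(task_id)
        if tile in caches[best_core]:
            caches[best_core].remove(tile)
        caches[best_core].append(tile)                     # LRU update for the chosen core
    return queues


if __name__ == "__main__":
    # Tasks touching a handful of data tiles; tasks that share a tile tend to
    # land on the same core until that core's queue grows long enough that the
    # load-balance term outweighs the locality bonus.
    demo = [(i, i % 3) for i in range(12)]
    for core, q in enumerate(schedule(demo, num_cores=4)):
        print(f"core {core}: {q}")
```

Raising w_balance relative to w_locality spreads work more evenly at the cost of cache reuse, which mirrors the single-objective baselines the abstract compares against.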