Full metadata record
DC Field | Value | Language
dc.contributor.author | Liu, Hao-Wei | en_US
dc.contributor.author | Kuo, Hsien-Kai | en_US
dc.contributor.author | Chen, Kuan-Ting | en_US
dc.contributor.author | Lai, Bo-Cheng Charles | en_US
dc.date.accessioned | 2014-12-08T15:35:17Z | -
dc.date.available | 2014-12-08T15:35:17Z | -
dc.date.issued | 2013 | en_US
dc.identifier.isbn | 978-1-4673-6238-2 | en_US
dc.identifier.issn | 2162-3562 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/23920 | -
dc.description.abstract | The massive data demand of GPGPUs requires expensive memory modules, such as GDDR, to support high data bandwidth. The high cost constrains the total memory capacity available to a GPGPU, so data must be transferred between the host CPUs and the GPGPU, and the long latency of these transfers results in significant performance overhead. To alleviate this issue, modern GPGPUs implement non-blocking data transfers, which allow a GPGPU to perform computation while data is being transmitted. This paper proposes a capacity-aware scheduling algorithm that exploits the non-blocking data transfer in modern GPGPUs. By effectively taking advantage of non-blocking transfers, the proposed algorithm achieves an average performance improvement of 24.01% over existing approaches that consider only memory capacity, as demonstrated by experimental results. | en_US
dc.language.iso | en_US | en_US
dc.subject | GPGPU | en_US
dc.subject | Memory Optimization | en_US
dc.subject | Non-blocking data transfer | en_US
dc.title | MEMORY CAPACITY AWARE NON-BLOCKING DATA TRANSFER ON GPGPU | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2013 IEEE WORKSHOP ON SIGNAL PROCESSING SYSTEMS (SIPS) | en_US
dc.citation.spage | 395 | en_US
dc.citation.epage | 400 | en_US
dc.contributor.department | 電子工程學系及電子研究所 | zh_TW
dc.contributor.department | Department of Electronics Engineering and Institute of Electronics | en_US
dc.identifier.wosnumber | WOS:000332832800069 | -
Appears in Collections: Conferences Paper
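
The abstract above refers to the non-blocking data transfer capability of modern GPGPUs, i.e., overlapping host-device copies with kernel execution. The following minimal CUDA sketch illustrates that mechanism only; it is not the capacity-aware scheduling algorithm proposed in the paper, and the kernel, buffer sizes, and stream layout are placeholder assumptions.

// Illustrative sketch of non-blocking (asynchronous) data transfer:
// two CUDA streams let the copy for one chunk overlap with the kernel
// running on the other chunk. Pinned host memory is required for
// cudaMemcpyAsync to be truly asynchronous.
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;   // placeholder computation
}

int main(void) {
    const int N = 1 << 20;
    const size_t bytes = N * sizeof(float);

    float *h_a, *h_b, *d_a, *d_b;
    cudaMallocHost(&h_a, bytes);   // pinned host buffers
    cudaMallocHost(&h_b, bytes);
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    for (int i = 0; i < N; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    // Chunk 0: enqueue its copy and kernel in stream s0 ...
    cudaMemcpyAsync(d_a, h_a, bytes, cudaMemcpyHostToDevice, s0);
    scale<<<(N + 255) / 256, 256, 0, s0>>>(d_a, N);

    // ... while chunk 1's copy in stream s1 can overlap with s0's kernel.
    cudaMemcpyAsync(d_b, h_b, bytes, cudaMemcpyHostToDevice, s1);
    scale<<<(N + 255) / 256, 256, 0, s1>>>(d_b, N);

    // Copy results back and wait for both streams to finish.
    cudaMemcpyAsync(h_a, d_a, bytes, cudaMemcpyDeviceToHost, s0);
    cudaMemcpyAsync(h_b, d_b, bytes, cudaMemcpyDeviceToHost, s1);
    cudaStreamSynchronize(s0);
    cudaStreamSynchronize(s1);

    printf("h_a[0] = %f, h_b[0] = %f\n", h_a[0], h_b[0]);  // expect 2.0 and 4.0

    cudaStreamDestroy(s0); cudaStreamDestroy(s1);
    cudaFree(d_a); cudaFree(d_b);
    cudaFreeHost(h_a); cudaFreeHost(h_b);
    return 0;
}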