Full metadata record
DC Field | Value | Language
dc.contributor.author | Lin, Chien-Yu | en_US
dc.contributor.author | Lai, Bo-Cheng | en_US
dc.date.accessioned | 2018-08-21T05:57:09Z | -
dc.date.available | 2018-08-21T05:57:09Z | -
dc.date.issued | 2018-01-01 | en_US
dc.identifier.issn | 2153-6961 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/147112 | -
dc.description.abstract | Sparsity is widely observed in convolutional neural networks: a large portion of both activations and weights can be zeroed without impairing the result. Keeping the data in a compressed-sparse format can cut energy consumption considerably thanks to reduced memory traffic. However, the wide SIMD-like MAC engines adopted in many CNN accelerators cannot consume the compressed input directly because of data misalignment. In this work, a novel Dual Indexing Module (DIM) is proposed to efficiently handle the alignment issue when activations and weights are both kept in compressed-sparse format. The DIM is implemented in a representative SIMD-like CNN accelerator and is able to exploit both compressed-sparse activations and weights. Synthesis results in a 40nm technology show that the DIM can reduce energy consumption by up to 46% and the Energy-Delay-Product (EDP) by 55.4%. | en_US
dc.language.iso | en_US | en_US
dc.title | Supporting Compressed-Sparse Activations and Weights on SIMD-like Accelerator for Sparse Convolutional Neural Networks | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2018 23RD ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC) | en_US
dc.citation.spage | 105 | en_US
dc.citation.epage | 110 | en_US
dc.contributor.department | 電子工程學系及電子研究所 | zh_TW
dc.contributor.department | Department of Electronics Engineering and Institute of Electronics | en_US
dc.identifier.wosnumber | WOS:000426987100017 | en_US
Appears in Collections: Conference Papers
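The abstract above turns on one idea: when both activations and weights are stored in compressed-sparse form, only the pairs whose original dense positions coincide produce useful MAC work, and those pairs must be found and aligned. The sketch below is a minimal software analogue of that index-matching step, not the paper's actual DIM hardware; the function name and the (values, indices) storage layout are illustrative assumptions.

```python
# Illustrative sketch only: matching non-zero activation/weight pairs that
# are each kept in compressed-sparse form (parallel lists of non-zero
# values and their dense positions, sorted by position). A SIMD-like MAC
# engine needs exactly these aligned pairs; everything else is skipped.

def sparse_dot(act_vals, act_idx, wgt_vals, wgt_idx):
    """Dot product of two compressed-sparse vectors via index intersection."""
    acc, i, j = 0, 0, 0
    while i < len(act_idx) and j < len(wgt_idx):
        if act_idx[i] == wgt_idx[j]:    # positions match: a useful MAC
            acc += act_vals[i] * wgt_vals[j]
            i += 1
            j += 1
        elif act_idx[i] < wgt_idx[j]:   # activation has no partner; skip it
            i += 1
        else:                           # weight has no partner; skip it
            j += 1
    return acc

# Dense vectors [0, 3, 0, 5] and [2, 0, 0, 4] in compressed form:
print(sparse_dot([3, 5], [1, 3], [2, 4], [0, 3]))  # only position 3 matches: 5 * 4 = 20
```

Note how the skipped branches are where the energy savings come from: zero-valued operands never reach the multipliers or the memory bus, which is the motivation the abstract gives for keeping data compressed end to end.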