Title: Enhancing Utilization of SIMD-Like Accelerator for Sparse Convolutional Neural Networks
Authors: Lai, Bo-Cheng
Pan, Jyun-Wei
Lin, Chien-Yu
Department: Department of Electronics Engineering and Institute of Electronics
Keywords: Load balance;machine learning;single-instruction-multiple-data (SIMD) architecture;sparse convolutional neural networks (CNNs)
Issue Date: 1-May-2019
Abstract: Although existing single-instruction-multiple-data (SIMD)-like accelerators can handle the compressed format of sparse convolutional neural networks (CNNs), the sparse and irregular distribution of nonzero elements causes low multiplier utilization within a processing engine (PE) and imbalanced computation across PEs. This brief addresses these issues with a data screening and task mapping (DSTM) accelerator that integrates a series of techniques spanning software refinement and hardware modules. An efficient indexing module identifies the effectual computation pairs and skips unnecessary computation in a fine-grained manner. Intra-PE load imbalance is alleviated by weight data rearrangement, and an effective task-sharing mechanism further balances computation across PEs. Compared with the state-of-the-art SIMD-like accelerator, the proposed DSTM enhances average PE utilization by 3.5x and achieves 59.7% higher overall processing throughput.
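The two ideas summarized in the abstract can be illustrated in software. Below is a minimal Python sketch, assuming a simple (index, value) compressed format: screening keeps only "effectual" multiply pairs where both the activation and the weight are nonzero, and a greedy task-sharing pass evens out the surviving work across PEs. All function and variable names here are illustrative, not from the paper; the actual DSTM accelerator realizes these steps as hardware modules.

```python
from typing import Dict, List, Tuple

def compress(vec: List[float]) -> List[Tuple[int, float]]:
    """Compressed sparse form: keep only (index, value) for nonzeros."""
    return [(i, v) for i, v in enumerate(vec) if v != 0.0]

def screen_pairs(acts: List[Tuple[int, float]],
                 wts: List[Tuple[int, float]]) -> List[Tuple[float, float]]:
    """Match activation and weight indices; only pairs where both operands
    are nonzero produce work, so every other multiplication is skipped."""
    wmap: Dict[int, float] = dict(wts)
    return [(a, wmap[i]) for i, a in acts if i in wmap]

def share_tasks(task_sizes: List[int], num_pes: int) -> List[List[int]]:
    """Greedy load balancing: assign each task (identified by its size in
    effectual multiplies) to the currently least-loaded PE."""
    pes: List[List[int]] = [[] for _ in range(num_pes)]
    loads = [0] * num_pes
    for t in sorted(task_sizes, reverse=True):
        k = loads.index(min(loads))
        pes[k].append(t)
        loads[k] += t
    return pes

if __name__ == "__main__":
    activations = [0.0, 1.5, 0.0, 2.0, 0.0, 0.5]
    weights     = [0.7, 0.0, 0.0, 1.1, 0.2, 0.0]
    pairs = screen_pairs(compress(activations), compress(weights))
    print(pairs)  # [(2.0, 1.1)]: 1 effectual multiply instead of 6
    # Uneven per-task workloads spread over 4 PEs:
    print(share_tasks([9, 1, 7, 2, 5, 3, 8], num_pes=4))
```

In this toy run, only one of the six candidate multiplications is effectual, which mirrors why dense SIMD lanes sit idle on sparse data, and the greedy assignment keeps the per-PE load totals within one task size of each other.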
URI: http://dx.doi.org/10.1109/TVLSI.2019.2897052
http://hdl.handle.net/11536/152414
ISSN: 1063-8210
DOI: 10.1109/TVLSI.2019.2897052
Journal: IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS
Volume: 27
Issue: 5
Start Page: 1218
End Page: 1222
Appears in Collections: Journal Articles