Title: | An Energy-Efficient Accelerator with Relative-Indexing Memory for Sparse Compressed Convolutional Neural Network |
Authors: | Wu, I-Chen; Huang, Po-Tsang; Lo, Chin-Yang; Hwang, Wei; Department of Electronics Engineering and Institute of Electronics; International College of Semiconductor Technology |
Issue Date: | 1-Jan-2019 |
Abstract: | Deep convolutional neural networks (CNNs) are widely used in image recognition and feature classification. However, deep CNNs are difficult to deploy fully on edge devices because their workloads are both computation-intensive and memory-intensive. The energy efficiency of CNNs is dominated by off-chip memory accesses and convolution computation. In this paper, an energy-efficient accelerator is proposed for sparse compressed CNNs that reduces DRAM accesses and eliminates zero-operand computation. Weight compression reduces the required memory capacity/bandwidth by removing a large portion of connections, while the ReLU function produces zero-valued activations that can be skipped. Additionally, the workloads are distributed across channels to increase the degree of task parallelism, and all-row-to-all-row non-zero element multiplication is adopted to skip redundant computation. Compared with a dense accelerator, simulation results show that the proposed accelerator achieves a 1.79x speedup on VGG-16 and reduces on-chip memory size, energy, and DRAM accesses by 23.51%, 69.53%, and 88.67%, respectively. |
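The relative-indexing memory named in the title can be illustrated with a short sketch: rather than storing absolute positions, each retained non-zero weight is paired with the zero-run length since the previous stored entry. The Python snippet below is a minimal sketch of that idea under the assumption of a 4-bit relative-index field; MAX_RUN, compress, and decompress are illustrative names, not the authors' implementation.

    # A minimal sketch (not the paper's hardware design) of relative-indexing
    # weight compression: each non-zero weight is stored with the zero-run
    # length ("relative index") since the previous stored entry, capped by the
    # width of the index field.

    MAX_RUN = 15  # assuming a 4-bit relative-index field

    def compress(weights):
        """Encode a flattened weight row as (values, relative indices)."""
        values, rel_idx, run = [], [], 0
        for w in weights:
            if w == 0:
                run += 1
                if run > MAX_RUN:           # run overflows the index field:
                    values.append(0)        # emit a zero-valued placeholder
                    rel_idx.append(MAX_RUN)
                    run = 0
                continue
            values.append(w)
            rel_idx.append(run)             # zeros skipped before this weight
            run = 0
        return values, rel_idx

    def decompress(values, rel_idx, length):
        """Rebuild the dense row; shown only to verify the encoding."""
        out, pos = [0] * length, -1
        for v, r in zip(values, rel_idx):
            pos += r + 1
            out[pos] = v
        return out

    row = [0, 0, 3, 0, 0, 0, 0, 5, 0, 1]
    vals, idx = compress(row)
    assert decompress(vals, idx, len(row)) == row

During convolution, only the stored non-zero values would be fetched, with the relative indices used to align them against the matching activations, which is consistent with the zero-operand skipping described in the abstract.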
URI: | http://hdl.handle.net/11536/153276 |
ISBN: | 978-1-5386-7884-8 |
Journal: | 2019 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS 2019) |
Start page: | 42 |
End page: | 45 |
Appears in Collections: | Conference Papers |