Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 邱冠霖 | zh_TW |
dc.contributor.author | 張添烜 | zh_TW |
dc.contributor.author | Chiu, Kuan-Lin | en_US |
dc.date.accessioned | 2018-01-24T07:42:38Z | - |
dc.date.available | 2018-01-24T07:42:38Z | - |
dc.date.issued | 2017 | en_US |
dc.identifier.uri | http://etd.lib.nctu.edu.tw/cdrfb3/record/nctu/#GT070450230 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/142749 | - |
dc.description.abstract | In recent years, convolutional neural networks (CNNs) have become very popular in machine learning, and their results are especially strong in the computer vision branch. However, because current algorithms have high computational complexity, powerful GPUs are required to compute CNN models. Many works reduce the computation by quantizing the weights or activation functions, but doing so directly hurts accuracy. This thesis therefore proposes a progressive simplification flow that, during training, reduces the weights from floating point to ternary values, quantizes the activation functions from floating point to fixed point, and simplifies batch normalization at an appropriate point. With this flow, the accuracy drops on ResNet-56 and DenseNet-40 are 1.61% and 3.9%, respectively. This thesis also proposes a compatible hardware design. Using sparse matrix loading and a grouped-sort-and-merge method, only non-zero values are fed into the accelerator, and the ternary weights are exploited to replace the multiplications in convolution with multiplexers and shift operators. For data reuse, the thesis proposes input view convolution to reduce the dependency between inputs, and dynamic PE cooperation to achieve the highest parallelism from the fewest inputs under different numbers of output feature maps. Finally, a design of about 3.28M logic gates is synthesized in a TSMC 40nm process; at a 500MHz operating frequency, ResNet-56 on CIFAR10 and ResNet-34 on ImageNet reach roughly 1684 FPS and 80 FPS, respectively. | zh_TW |
dc.description.abstract | Convolutional neural networks (CNNs) have surged in popularity in the last few years, and their performance is especially impressive in the computer vision field. However, the computational complexity of state-of-the-art models is very high, so powerful GPUs are needed to compute CNN models. Several works reduce the computation by quantizing weights and activations, but quantizing models directly may hurt accuracy. This thesis therefore proposes a systematic method, named progressive quantization, that simplifies models during training: weights are reduced from floating point to ternary values, activations are quantized from floating point to fixed point, and batch normalization is simplified at the proper time. Training models with this method, the accuracy drops on ResNet-56 and DenseNet-40 are 1.61% and 3.9%, respectively, in our experiments. This thesis also proposes a compatible hardware design. Only non-zero values are imported into the accelerator through sparse matrix loading and a grouped-sort-and-merge method, and the ternary weights are exploited to replace multipliers with multiplexers and shift operators. For data reuse, the thesis proposes input view convolution to reduce the dependency between convolution inputs, and PE cooperation to compute with high parallelism from few inputs across different numbers of output feature maps. Finally, an implementation synthesized in a TSMC 40nm process takes about 3.28M gates; at a 500MHz clock frequency, ResNet-56 on CIFAR10 and ResNet-34 on ImageNet reach 1684 FPS and 80 FPS, respectively. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | convolutional neural network | zh_TW |
dc.subject | sparse computation | zh_TW |
dc.subject | ternary computation | zh_TW |
dc.subject | quantized computation | zh_TW |
dc.subject | neural network hardware | zh_TW |
dc.subject | convolutional neural network | en_US |
dc.subject | sparse calculation | en_US |
dc.subject | ternary calculation | en_US |
dc.subject | quantization calculation | en_US |
dc.subject | neural network hardware | en_US |
dc.title | Sparse Ternary Convolutional Neural Network Model and its Hardware Design | zh_TW |
dc.title | Sparse Ternary Convolutional Neural Network Model and its Hardware Design | en_US |
dc.type | Thesis | en_US |
dc.contributor.department | Institute of Electronics | zh_TW |
Appears in Collections: | Thesis |
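The abstracts above describe two ideas that a small worked example can make concrete: reducing float weights to ternary values, and using those ternary weights to compute convolutions without multipliers (skipping zeros, selecting +x or -x instead of multiplying). The sketch below is illustrative only and is not the thesis's actual algorithm or hardware: the threshold rule (0.7 × mean |W|), the per-tensor scale `alpha`, and the names `ternarize`/`ternary_dot` are assumptions made for demonstration.

```python
import numpy as np

def ternarize(w, delta_ratio=0.7):
    """Reduce float weights to {-1, 0, +1} with a shared scale alpha.

    delta_ratio is an assumed hyperparameter; the thesis does not state
    its exact threshold rule in the abstract.
    """
    delta = delta_ratio * np.mean(np.abs(w))       # pruning threshold
    t = np.zeros_like(w, dtype=np.int8)
    t[w > delta] = 1
    t[w < -delta] = -1
    nz = np.abs(w[t != 0])
    alpha = float(nz.mean()) if nz.size else 0.0   # per-tensor scale
    return t, alpha

def ternary_dot(x, t, alpha):
    """Multiplier-free inner product with ternary weights.

    Each weight only selects +x, -x, or skips the input entirely
    (the sparsity the accelerator exploits); the single multiply by
    alpha could be folded into later scaling / batch-norm logic.
    """
    acc = 0.0
    for xi, ti in zip(x, t):
        if ti == 0:
            continue                   # zero weight: skipped, like sparse loading
        acc += xi if ti > 0 else -xi   # mux-style sign selection
    return alpha * acc

# Tiny usage example on random data.
rng = np.random.default_rng(0)
w = rng.standard_normal(64).astype(np.float32)
x = rng.standard_normal(64).astype(np.float32)
t, alpha = ternarize(w)
print("sparsity:", float((t == 0).mean()))
print("ternary:", ternary_dot(x, t, alpha), "float:", float(x @ w))
```

The ternary result only approximates the float inner product; the point of the sketch is that the inner loop contains no multiplications and naturally skips zero weights, which is the property the proposed accelerator builds on.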