Full metadata record
DC Field | Value | Language
dc.contributor.author | 包苡廷 | en_US
dc.contributor.author | Bao, Yi-Ting | en_US
dc.contributor.author | 簡仁宗 | en_US
dc.contributor.author | Chien, Jen-Tzung | en_US
dc.date.accessioned | 2015-11-26T01:02:13Z | -
dc.date.available | 2015-11-26T01:02:13Z | -
dc.date.issued | 2015 | en_US
dc.identifier.uri | http://140.113.39.130/cdrfb3/record/nctu/#GT070260246 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/127260 | -
dc.description.abstract | In the fields of machine learning and signal processing, there is growing interest in analyzing the multi-way or multi-channel data found throughout practical information systems, which has made tensor factorization and classification important research topics. In a conventional neural network, the classifier is trained from a set of input vectors, and the trained model is then used to predict and judge test data. Multi-channel data have traditionally been unfolded into high-dimensional vectors for model training, but the temporal and spatial correlations among neighboring entries across different ways are lost during classifier training, which limits classification performance and also demands a much larger number of model parameters to represent complicated data structures and characteristics. This thesis develops a new tensor classification network that combines tensor factorization with neural network classification, so that multi-way features can be extracted and classified. The tensor neural network effectively integrates Tucker decomposition with the conventional one-way neural network classifier: the affine transformations of a conventional neural network are replaced by tensor transformations. We extend the conventional vector-based one-way neural network into a generalized tensor-based multi-way neural network, where the mapping from input tensors to latent tensors in tensor space compactly preserves the multi-channel temporal and spatial information in the trained tensor parameters, and we develop a tensor backpropagation algorithm to build the tensor neural network efficiently. The efficient tensor mapping yields a very compact classifier whose training is also faster than that of a conventional neural network. This study further compares the tensor neural network against vector-based neural networks and tensor decomposition. Experiments on image recognition show that, relative to a vector-based neural network, the tensor neural network achieves comparable or even better accuracy with far fewer parameters and a much lower training cost. | zh_TW
dc.description.abstract | The growing interest in multi-way or multi-channel data analysis has made tensor factorization and classification crucial issues in the areas of signal processing and machine learning. Conventionally, the neural network (NN) classifier is estimated from a set of input vectors or one-way observations. Multi-way observations are unfolded into high-dimensional vectors for model training. As a result, the classification performance is constrained, because the correlation or neighboring information in the temporal or spatial domains among different ways is lost in the trained NN classifier, and more parameters are required to learn the complicated data structure from multiple ways, trials or channels. This study presents a new tensor classification network (TCN) which combines tensor factorization and NN classification for multi-way feature extraction and classification. The proposed TCN can be viewed as a generalization of the NN classifier for multi-way data classification, where Tucker decomposition and a nonlinear operation are performed in each hidden unit. Using this approach, the affine transformation in a conventional NN is replaced by a tensor transformation. We generalize from the vector-based NN classifier to the tensor-based TCN, where the multi-way information in temporal, spatial or other domains is preserved by projecting the input tensors into latent tensors. The projection over tensor spaces is efficiently characterized, so that a very compact classifier can be achieved. The proposed TCN not only constructs a compact model but also reduces the computation time in comparison with the traditional NN classifier. A tensor error backpropagation algorithm is developed to efficiently establish the tensor neural network. Experimental results on image recognition over different datasets demonstrate that the TCN attains comparable or even better classification performance with very few parameters and a reduced computation cost when compared with the traditional NN classifier. | en_US
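The abstract's key idea, replacing the affine transformation of a hidden layer with a Tucker-style tensor transformation, can be sketched as follows. This is a minimal illustrative sketch under assumptions, not the thesis's implementation: the function names (`mode_n_product`, `tensor_layer`), the 2-way input, the factor-matrix shapes, and the sigmoid nonlinearity are all choices made here for illustration.

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Mode-n product: contract `tensor` along axis `mode` with the
    columns of `matrix`; that axis is resized to matrix.shape[0]."""
    return np.moveaxis(np.tensordot(matrix, tensor, axes=(1, mode)), 0, mode)

def tensor_layer(X, U1, U2, bias):
    """One hypothetical TCN-style hidden layer for a 2-way input X (I1 x I2):
    a Tucker-style tensor transformation (two mode-wise projections)
    followed by a sigmoid nonlinearity, in place of the usual affine map."""
    Z = mode_n_product(mode_n_product(X, U1, 0), U2, 1) + bias
    return 1.0 / (1.0 + np.exp(-Z))

# Toy example: project a 4x5 input tensor to a 3x2 latent tensor.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5))
U1 = rng.standard_normal((3, 4))   # factor matrix for mode 1
U2 = rng.standard_normal((2, 5))   # factor matrix for mode 2
H = tensor_layer(X, U1, U2, bias=np.zeros((3, 2)))
print(H.shape)  # (3, 2)
```

This toy layer also illustrates the parameter-saving argument in the abstract: mapping the unfolded 20-dimensional vector to 6 hidden units with a dense affine layer would need 20 × 6 + 6 = 126 parameters, while the two factor matrices plus the bias need only 3 × 4 + 2 × 5 + 6 = 28.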
dc.language.iso | en_US | en_US
dc.subject | multi-way data | zh_TW
dc.subject | tensor factorization | zh_TW
dc.subject | neural network | zh_TW
dc.subject | deep learning | zh_TW
dc.subject | pattern recognition | zh_TW
dc.subject | image recognition | zh_TW
dc.subject | multi-way data | en_US
dc.subject | tensor factorization | en_US
dc.subject | neural network | en_US
dc.subject | deep learning | en_US
dc.subject | pattern classification | en_US
dc.subject | image recognition | en_US
dc.title | Tensor Neural Networks Applied to Multi-way Data Classification | zh_TW
dc.title | Tensor Neural Networks for Multi-way Data Classification | en_US
dc.type | Thesis | en_US
dc.contributor.department | Institute of Communications Engineering | zh_TW
Appears in Collections: Thesis