Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Chien-Liang | en_US |
dc.contributor.author | Hsaio, Wen-Hoar | en_US |
dc.contributor.author | Xiao, Bin | en_US |
dc.contributor.author | Chen, Chun-Yu | en_US |
dc.contributor.author | Wu, Wei-Liang | en_US |
dc.date.accessioned | 2018-08-21T05:53:42Z | - |
dc.date.available | 2018-08-21T05:53:42Z | - |
dc.date.issued | 2017-05-17 | en_US |
dc.identifier.issn | 0925-2312 | en_US |
dc.identifier.uri | http://dx.doi.org/10.1016/j.neucom.2017.01.071 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/145042 | - |
dc.description.abstract | This work devises a maximum-margin sparse coding (MMSC) algorithm that jointly considers reconstruction loss and hinge loss in the model. The sparse representation along with the maximum-margin constraint is analogous to the kernel trick and the maximum-margin property of the support vector machine (SVM), providing a basis for the proposed algorithm to perform well in classification tasks. The key idea behind the proposed method is to use labeled and unlabeled data to learn discriminative representations and model parameters simultaneously, making it easier to classify data in the new space. We propose to use block coordinate descent to learn all the components of the proposed model and give detailed derivations of the update rules for the model variables. Theoretical analysis of the convergence of the proposed MMSC algorithm is provided based on Zangwill's global convergence theorem. Additionally, most previous studies on dictionary learning suggest using an overcomplete dictionary to improve classification performance, but this is computationally intensive when the dimension of the input data is large. We conduct experiments on several real data sets, including the Extended YaleB, AR face, and Caltech101 data sets. The experimental results indicate that the proposed algorithm outperforms the comparison algorithms without an overcomplete dictionary, providing flexibility to deal with high-dimensional data sets. (C) 2017 Elsevier B.V. All rights reserved. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | Maximum-margin | en_US |
dc.subject | Sparse coding | en_US |
dc.subject | Block coordinate descent | en_US |
dc.title | Maximum-margin sparse coding | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1016/j.neucom.2017.01.071 | en_US |
dc.identifier.journal | NEUROCOMPUTING | en_US |
dc.citation.volume | 238 | en_US |
dc.citation.spage | 340 | en_US |
dc.citation.epage | 350 | en_US |
dc.contributor.department | 資訊工程學系 | zh_TW |
dc.contributor.department | 工業工程與管理學系 | zh_TW |
dc.contributor.department | Department of Computer Science | en_US |
dc.contributor.department | Department of Industrial Engineering and Management | en_US |
dc.identifier.wosnumber | WOS:000397372100039 | en_US |
Appears in Collections: Journal Articles
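The abstract above couples a reconstruction loss with a hinge loss over a learned dictionary and sparse codes, and optimizes the result by block coordinate descent. As a purely illustrative sketch of such a joint objective (the set symbols \(\mathcal{A}\) for all samples and \(\mathcal{L}\) for the labeled subset, the trade-off weights \(\lambda\) and \(\beta\), the \(\ell_1\) sparsity penalty, and the bias term \(b\) are assumptions for exposition, not taken from the paper), it could be written as:

```latex
% Illustrative sketch only: a joint objective over dictionary D, sparse codes s_i,
% and a linear classifier (w, b). \mathcal{A} = all samples, \mathcal{L} = labeled
% samples; \lambda, \beta and the exact hinge variant are assumed, not from the paper.
\[
\min_{D,\,\{s_i\},\,w,\,b}\;
  \sum_{i \in \mathcal{A}} \Big( \tfrac{1}{2}\,\lVert x_i - D s_i \rVert_2^2
    + \lambda \lVert s_i \rVert_1 \Big)
  \;+\; \beta \sum_{i \in \mathcal{L}}
    \max\!\big(0,\; 1 - y_i\,(w^{\top} s_i + b)\big)
\]
```

Under this reading, block coordinate descent would cycle through three blocks until the objective stops decreasing: update the codes \(\{s_i\}\) with \(D\) and \((w, b)\) fixed, refit the dictionary \(D\) column-wise under unit-norm constraints, and refit the classifier \((w, b)\) on the current codes of the labeled samples.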