Full metadata record
DC Field | Value | Language
dc.contributor.author | Antioquia, Arren Matthew C. | en_US
dc.contributor.author | Tan, Daniel Stanley | en_US
dc.contributor.author | Azcarraga, Arnulfo | en_US
dc.contributor.author | Cheng, Wen-Huang | en_US
dc.contributor.author | Hua, Kai-Lung | en_US
dc.date.accessioned | 2019-12-13T01:12:51Z | -
dc.date.available | 2019-12-13T01:12:51Z | -
dc.date.issued | 2018-01-01 | en_US
dc.identifier.isbn | 978-1-5386-4458-4 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/153287 | -
dc.description.abstract | With the introduction of Convolutional Neural Networks (CNNs), models for image classification achieve higher classification accuracy. In typical CNN architecture design, increasing the number of layers yields higher classification accuracy but also increases the number of parameters and the model size, which negatively affects training time, processing time, and memory requirements. We develop ZipNet, a CNN architecture with higher classification accuracy than ZFNet, the winner of ILSVRC 2013, but with a 48.5x smaller model size and 48.7x fewer parameters. ZipNet's classification accuracy exceeds that of ZFNet and SqueezeNet on all configurations of the Caltech-256 dataset with varying numbers of training examples. | en_US
dc.language.iso | en_US | en_US
dc.subject | Convolutional Neural Networks | en_US
dc.subject | Model Compression | en_US
dc.subject | Image Classification | en_US
dc.subject | Object Classification | en_US
dc.subject | Deep Learning | en_US
dc.title | ZipNet: ZFNet-level Accuracy with 48x Fewer Parameters | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2018 IEEE INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING (IEEE VCIP) | en_US
dc.citation.spage | 0 | en_US
dc.citation.epage | 0 | en_US
dc.contributor.department | 電子工程學系及電子研究所 | zh_TW
dc.contributor.department | Department of Electronics Engineering and Institute of Electronics | en_US
dc.identifier.wosnumber | WOS:000493725000061 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conference Papers