Full Metadata Record
DC Field | Value | Language
dc.contributor.author | Chiu, Hong Ming | en_US
dc.contributor.author | Lin, Kuan-Chih | en_US
dc.contributor.author | Chang, Tian Sheuan | en_US
dc.date.accessioned | 2019-10-05T00:09:46Z | -
dc.date.available | 2019-10-05T00:09:46Z | -
dc.date.issued | 2019-01-01 | en_US
dc.identifier.isbn | 978-1-7281-0397-6 | en_US
dc.identifier.issn | 0271-4302 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/152950 | -
dc.description.abstract | Modern convolutional neural network (CNN) models offer significant performance improvements over previous methods, but suffer from high computational complexity and cannot adapt to different run-time needs. To solve the above problem, this paper proposes an inference-stage pruning method that offers multiple operation points in a single model, providing computation-accuracy modulation at run time. The method applies to shallow CNN models as well as very deep networks such as ResNet-101. Experimental results show that up to 50% savings in FLOPs are available by trading away less than 10% of the top-1 accuracy. | en_US
dc.language.iso | en_US | en_US
dc.title | Run Time Adaptive Network Slimming for Mobile Environments | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2019 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS) | en_US
dc.citation.spage | 0 | en_US
dc.citation.epage | 0 | en_US
dc.contributor.department | 電子工程學系及電子研究所 | zh_TW
dc.contributor.department | Department of Electronics Engineering and Institute of Electronics | en_US
dc.identifier.wosnumber | WOS:000483076400006 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conference Papers
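The abstract describes the method only at a high level. As an illustration of the general idea behind run-time network slimming, the sketch below ranks channels by the magnitude of their batch-normalization scaling factors and keeps only a chosen fraction, so different keep ratios give different FLOP-accuracy operation points in one model. This is a minimal sketch under assumed details, not the paper's actual algorithm; all function names, the layer size (64 channels), and the 0.5 keep ratio are illustrative.

```python
import numpy as np

# Illustrative sketch (not the paper's code): prune channels whose
# batch-norm scale |gamma| is small, at inference time.
def channel_mask(gammas, keep_ratio):
    """Boolean mask keeping the keep_ratio fraction of channels
    with the largest |gamma| (batch-norm scaling factor)."""
    k = max(1, int(round(keep_ratio * len(gammas))))
    order = np.argsort(-np.abs(gammas))  # indices, descending by |gamma|
    mask = np.zeros(len(gammas), dtype=bool)
    mask[order[:k]] = True
    return mask

def conv_flops(in_ch, out_ch, kernel, height, width):
    """Multiply-accumulate count of one conv layer (stride/padding ignored)."""
    return in_ch * out_ch * kernel * kernel * height * width

# Hypothetical layer: 64 output channels with random batch-norm scales.
rng = np.random.default_rng(0)
gammas = rng.normal(size=64)
mask = channel_mask(gammas, keep_ratio=0.5)  # one "operation point"

full = conv_flops(64, 64, 3, 32, 32)
pruned = conv_flops(64, int(mask.sum()), 3, 32, 32)
print(f"kept {int(mask.sum())} / 64 channels, FLOPs ratio {pruned / full:.2f}")
# → kept 32 / 64 channels, FLOPs ratio 0.50
```

Sweeping `keep_ratio` over several values would yield the multiple operation points the abstract refers to, letting a deployed model trade accuracy for computation without retraining.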