Full metadata record
DC Field | Value | Language
dc.contributor.author | Huang, Yu-Min | en_US
dc.contributor.author | Tseng, Huan-Hsin | en_US
dc.contributor.author | Chien, Jen-Tzung | en_US
dc.date.accessioned | 2020-10-05T02:01:29Z | -
dc.date.available | 2020-10-05T02:01:29Z | -
dc.date.issued | 2019-01-01 | en_US
dc.identifier.isbn | 978-1-7281-3248-8 | en_US
dc.identifier.issn | 2309-9402 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/155267 | -
dc.description.abstract | Spatial image and optical flow provide complementary information for video representation and classification. Traditional methods encode the two stream signals separately and then fuse them at the end of the streams. This paper presents a new multi-stream recurrent neural network in which the streams are tightly coupled at each time step. Importantly, we propose a stochastic fusion mechanism for multiple streams of video data based on Gumbel samples to increase the prediction power. A stochastic backpropagation algorithm is implemented to train the multi-stream neural network with stochastic fusion through a joint optimization of the convolutional encoder and recurrent decoder. Experiments on the UCF101 dataset illustrate the merits of the proposed stochastic fusion in recurrent neural networks in terms of interpretation and classification performance. | en_US
dc.language.iso | en_US | en_US
dc.title | Stochastic Fusion for Multi-stream Neural Network in Video Classification | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2019 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC) | en_US
dc.citation.spage | 69 | en_US
dc.citation.epage | 74 | en_US
dc.contributor.department | 電機工程學系 | zh_TW
dc.contributor.department | Department of Electrical and Computer Engineering | en_US
dc.identifier.wosnumber | WOS:000555696900013 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conference Papers
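The abstract above describes a stochastic fusion mechanism that mixes spatial and optical-flow streams with weights derived from Gumbel samples. As a minimal sketch (not the authors' implementation), the Python snippet below shows how Gumbel-softmax weights could gate two stream feature vectors at a single time step; the function names (`fuse_streams`, `gumbel_softmax`), the temperature `tau`, and the 256-dimensional features are illustrative assumptions, and the convolutional encoder, recurrent decoder, and stochastic-backpropagation training loop are omitted.

```python
import numpy as np

def sample_gumbel(shape, eps=1e-20, rng=None):
    """Draw Gumbel(0, 1) noise via the inverse-transform trick."""
    rng = rng if rng is not None else np.random.default_rng()
    u = rng.uniform(low=eps, high=1.0, size=shape)
    return -np.log(-np.log(u))

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Relaxed one-hot sample over stream-selection logits."""
    y = (logits + sample_gumbel(logits.shape, rng=rng)) / tau
    y = y - y.max()                 # numerical stability before exponentiation
    e = np.exp(y)
    return e / e.sum()

def fuse_streams(h_spatial, h_flow, logits, tau=1.0, rng=None):
    """Stochastically fuse spatial and optical-flow features at one time step.

    h_spatial, h_flow : per-stream feature vectors of the same dimension
    logits            : 2-dimensional stream-selection logits (assumed to be
                        produced by the network; here supplied directly)
    """
    w = gumbel_softmax(logits, tau=tau, rng=rng)    # soft stream weights
    return w[0] * h_spatial + w[1] * h_flow

# Illustrative usage with random features standing in for encoder outputs
rng = np.random.default_rng(0)
h_rgb, h_flow = rng.standard_normal(256), rng.standard_normal(256)
fused = fuse_streams(h_rgb, h_flow, logits=np.array([0.3, -0.1]), rng=rng)
print(fused.shape)   # (256,)
```

In Gumbel-softmax relaxations generally, a lower temperature `tau` pushes the weights toward a hard selection of a single stream, while a higher temperature yields a smoother blend of the two streams.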