Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ardianto, Sandy | en_US |
dc.contributor.author | Hang, Hsueh-Ming | en_US |
dc.date.accessioned | 2019-08-02T02:24:16Z | - |
dc.date.available | 2019-08-02T02:24:16Z | - |
dc.date.issued | 2018-01-01 | en_US |
dc.identifier.isbn | 978-9-8814-7685-2 | en_US |
dc.identifier.issn | 2309-9402 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/152429 | - |
dc.description.abstract | In this paper, we study a multi-modal and multi-view action recognition system based on deep-learning techniques. We extend the Temporal Segment Network with an additional data-fusion stage that combines information from different sources. We use multiple modalities, such as RGB, depth, and infrared data, to detect predefined human actions, and we test various combinations of these data sources to examine their impact on the final detection accuracy. We design three information fusion methods to generate the final decision; the most interesting is our proposed Learned Fusion network. It turns out that the Learned Fusion structure achieves the best results but requires more training. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | human action recognition | en_US |
dc.subject | neural nets | en_US |
dc.subject | deep learning | en_US |
dc.subject | multi-view video | en_US |
dc.subject | multi-modal video | en_US |
dc.subject | information fusion | en_US |
dc.title | Multi-View and Multi-Modal Action Recognition with Learned Fusion | en_US |
dc.type | Proceedings Paper | en_US |
dc.identifier.journal | 2018 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC) | en_US |
dc.citation.spage | 1601 | en_US |
dc.citation.epage | 1604 | en_US |
dc.contributor.department | 電機學院 | zh_TW |
dc.contributor.department | College of Electrical and Computer Engineering | en_US |
dc.identifier.wosnumber | WOS:000468383400259 | en_US |
dc.citation.woscount | 0 | en_US |
Appears in Collections: | Conference Papers
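The abstract above mentions three information fusion methods, the best-performing being a learned fusion network that combines scores from different modalities (RGB, depth, infrared) and views. The record does not describe the architecture, so the following is only a minimal sketch, assuming each stream already produces a per-class score vector (as a Temporal Segment Network branch would) and that the learned fusion is a small trainable layer over the concatenated scores; the class `LearnedFusionHead` and all names here are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn


class LearnedFusionHead(nn.Module):
    """Hypothetical sketch of a learned fusion stage.

    Each modality/view stream (e.g. RGB, depth, infrared) is assumed to output
    a per-class score vector; this head learns how to combine them, instead of
    fixed averaging or voting.
    """

    def __init__(self, num_streams: int, num_classes: int, hidden: int = 256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(num_streams * num_classes, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, stream_scores: list) -> torch.Tensor:
        # stream_scores: list of (batch, num_classes) tensors, one per stream.
        x = torch.cat(stream_scores, dim=1)  # (batch, num_streams * num_classes)
        return self.fuse(x)                  # fused (batch, num_classes) scores


if __name__ == "__main__":
    # Toy usage: 3 streams (RGB, depth, IR), 60 action classes, batch of 4.
    head = LearnedFusionHead(num_streams=3, num_classes=60)
    scores = [torch.randn(4, 60) for _ in range(3)]
    print(head(scores).shape)  # torch.Size([4, 60])
```

In this sketch, the fixed alternatives the abstract alludes to (e.g. averaging or voting over stream scores) would simply replace the trainable `fuse` layers with a parameter-free combination, which is why the learned variant needs extra training.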