Full metadata record
DC Field | Value | Language
dc.contributor.author | Chou, Chien-Li | en_US
dc.contributor.author | Chen, Hua-Tsung | en_US
dc.contributor.author | Hsu, Chun-Chieh | en_US
dc.contributor.author | Lee, Suh-Yin | en_US
dc.date.accessioned | 2017-04-21T06:48:15Z | -
dc.date.available | 2017-04-21T06:48:15Z | -
dc.date.issued | 2015 | en_US
dc.identifier.isbn | 978-1-4799-7079-7 | en_US
dc.identifier.issn | 2330-7927 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/136028 | -
dc.description.abstract | Traditional video annotation approaches focus on annotating keyframes, shots, or whole videos with semantic keywords. However, extracted keyframes and shots lack semantic meaning, and a few keywords can hardly describe a video that covers multiple topics. We therefore propose a novel video annotation framework that uses near-duplicate segment detection not only to preserve but also to purify the semantic meaning of the target annotation units. A hierarchical near-duplicate segment detection method is proposed to efficiently localize near-duplicate segments at the frame level. Videos containing near-duplicate segments are clustered, and the keyword distributions of the clusters are analyzed. Finally, keywords ranked by their keyword distribution scores are annotated onto the obtained annotation units. Comprehensive experiments demonstrate the effectiveness of the proposed video annotation framework and near-duplicate segment detection method. | en_US
dc.language.iso | en_US | en_US
dc.subject | video annotation | en_US
dc.subject | automatic annotation | en_US
dc.subject | near-duplicate segment detection | en_US
dc.subject | web video analysis | en_US
dc.title | A NOVEL VIDEO ANNOTATION FRAMEWORK USING NEAR-DUPLICATE SEGMENT DETECTION | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:000380531100114 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conference Papers
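The abstract describes a pipeline in which videos sharing near-duplicate segments are clustered, the keyword distributions of each cluster are analyzed, and the top-ranked keywords become the annotations. The following is a minimal sketch of the keyword-ranking step only; the relative-frequency score, the `rank_cluster_keywords` helper, and the sample cluster are illustrative assumptions, not the scoring function defined in the paper.

```python
from collections import Counter

def rank_cluster_keywords(cluster_keywords, top_k=3):
    """Rank the keywords of one cluster of near-duplicate videos.

    cluster_keywords: list of per-video keyword lists.
    Scores each keyword by its relative frequency within the cluster
    (a simple stand-in for the paper's keyword distribution score)
    and returns the top_k (keyword, score) pairs.
    """
    counts = Counter(kw for video in cluster_keywords for kw in video)
    total = sum(counts.values())
    scored = [(kw, c / total) for kw, c in counts.items()]
    # Sort by descending score, breaking ties alphabetically.
    scored.sort(key=lambda pair: (-pair[1], pair[0]))
    return scored[:top_k]

# Hypothetical cluster: three web videos that share a near-duplicate segment.
cluster = [
    ["lecture", "algorithm", "sorting"],
    ["sorting", "quicksort", "lecture"],
    ["sorting", "algorithm"],
]
print(rank_cluster_keywords(cluster, top_k=2))
```

Keywords that dominate a cluster's distribution ("sorting" above) are taken as the annotations for the detected annotation units, while rare keywords are filtered out by the ranking.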