Title: | Multimodal Video-to-Near-Scene Annotation |
Authors: | Chou, Chien-Li; Chen, Hua-Tsung; Lee, Suh-Yin (Department of Computer Science, National Chiao Tung University) |
Keywords: | Near-duplicate segment alignment; near-duplicate video retrieval; near-scene detection; near-scene annotation; video annotation |
Issue Date: | Feb-2017 |
Abstract: | Traditional video annotation approaches focus on annotating keyframes/shots or whole videos with semantic keywords. However, the extraction of keyframes/shots may lack semantic meaning, and a few keywords can hardly describe the content of a long video covering multiple topics. In this work, near-scenes, which contain similar concepts, topics, or semantic meanings, are designed for better video content understanding and annotation. We propose a novel framework of hierarchical video-to-near-scene annotation that not only preserves but also purifies the semantic meanings of near-scenes. To detect near-scenes, a pattern-based prefix tree is first constructed for fast retrieval of near-duplicate videos. Then, videos containing similar near-duplicate segments and similar keywords are clustered, taking multimodal features, both visual and textual, into consideration. To enhance the precision of near-scene detection, a pattern-to-intensity-mark (PIM) method is proposed to perform precise frame-level near-duplicate segment alignment. For each near-scene, a video-to-concept distribution model is designed to analyze the representativeness of keywords and the discrimination of clusters through the proposed potential term frequency, inverse document frequency, and entropy. Tags are ranked according to their video-to-concept distribution scores, and the tags with the highest scores are propagated to the detected near-scenes. Extensive experiments demonstrate that the proposed PIM outperforms the compared state-of-the-art approaches in terms of quality of segments and quality of frames for near-scene detection. Furthermore, the proposed framework of hierarchical video-to-near-scene annotation achieves high-quality near-scene annotation in terms of mean average precision. |
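Two illustrative sketches of ideas named in the abstract follow. Both are minimal sketches under stated assumptions, not the authors' implementations; the paper's actual pattern encoding, PIM alignment, and scoring formulas are not given in this record.

First, a prefix tree (trie) indexing per-video symbol sequences for fast retrieval of near-duplicate candidates. The symbol alphabet and the reduction of a video to a symbol string are hypothetical here:

```python
class PatternTrie:
    """Toy prefix tree over per-video symbol sequences.

    Assumption: each video has been reduced to a string of quantized
    frame symbols (e.g. "ABBACD"); the paper's real pattern encoding
    is not described in this record.
    """

    def __init__(self):
        self.children = {}
        self.video_ids = set()  # videos whose pattern passes through this node

    def insert(self, pattern, video_id):
        node = self
        for sym in pattern:
            node = node.children.setdefault(sym, PatternTrie())
            node.video_ids.add(video_id)

    def query(self, prefix):
        """Return ids of indexed videos sharing this pattern prefix."""
        node = self
        for sym in prefix:
            if sym not in node.children:
                return set()
            node = node.children[sym]
        return node.video_ids


index = PatternTrie()
index.insert("ABBACD", "v1")   # hypothetical quantized frame symbols
index.insert("ABBAXY", "v2")
print(index.query("ABBA"))     # {'v1', 'v2'}: near-duplicate candidates
```

Second, a generic stand-in for the tag-ranking step: keywords are weighted by TF-IDF over near-scene clusters and damped by the entropy of their cross-cluster distribution, so that keywords concentrated in few clusters (low entropy) score higher. The exact "potential term frequency" and entropy formulations belong to the paper and are not reproduced here:

```python
import math
from collections import Counter

def rank_tags(clusters):
    """Score keywords for propagation to near-scenes.

    clusters: list of keyword lists, one per near-scene cluster.
    Returns a dict keyword -> score, where a high score means frequent,
    cluster-specific, and therefore representative. Generic TF-IDF plus
    entropy damping; an illustrative assumption, not the paper's model.
    """
    n = len(clusters)
    tf = [Counter(c) for c in clusters]          # term frequency per cluster
    df = Counter(w for t in tf for w in t)       # clusters containing each word
    scores = {}
    for w, d in df.items():
        counts = [t[w] for t in tf if w in t]
        total = sum(counts)
        # Entropy (bits) of the keyword's distribution across clusters:
        # 0 if it appears in one cluster only, larger if spread out.
        ent = -sum((c / total) * math.log2(c / total) for c in counts)
        idf = math.log((1 + n) / (1 + d))
        scores[w] = total * idf / (1.0 + ent)
    return scores
```

The highest-scoring tags would then be propagated to the detected near-scenes, mirroring the final step described in the abstract.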
URI: | http://dx.doi.org/10.1109/TMM.2016.2614426 http://hdl.handle.net/11536/133182 |
ISSN: | 1520-9210 |
DOI: | 10.1109/TMM.2016.2614426 |
Journal: | IEEE TRANSACTIONS ON MULTIMEDIA |
Volume: | 19 |
Issue: | 2 |
Start Page: | 354 |
End Page: | 366 |
Appears in Collections: | Articles |