Title: Composition and retrieval of visual information for video databases
Authors: Cheng, PJ
Yang, WP
Department of Computer Science
Keywords: content-based retrieval;video data modeling;spatio-temporal composition;query by example and trajectory matching
Issue Date: 1-Dec-2001
Abstract: This paper presents a new visual aggregation model for representing visual information about moving objects in video data. Based on available automatic scene segmentation and object tracking algorithms, the proposed model provides eight operations to calculate object motions at various levels of semantic granularity. It represents the trajectory, color and dimensions of a single moving object and the directional and topological relations among multiple objects over a time interval. Each representation of a motion can be normalized to reduce computational cost and improve storage utilization. To facilitate query processing, two optimal approximate matching algorithms are designed to match time-series visual features of moving objects. Experimental results indicate that the proposed algorithms substantially outperform conventional subsequence matching methods in measuring the similarity between trajectories. Finally, the visual aggregation model is integrated into a relational database system, and a prototype content-based video retrieval system has been implemented as well. (C) 2001 Academic Press.
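To make the trajectory-matching idea concrete, the sketch below shows a brute-force subsequence comparison of 2D object trajectories by summed point-wise Euclidean distance. This is only a minimal baseline under assumed representations; the paper's two optimal approximate matching algorithms and its normalization of motion representations are not specified here, and the names (best_subsequence_match, query, stored) are illustrative.

```python
# Minimal sketch (NOT the paper's algorithms): slide a query trajectory over a
# stored trajectory and return the offset with the smallest total distance.
from math import hypot
from typing import List, Tuple

Point = Tuple[float, float]  # assumed (x, y) position of an object at one frame


def distance(a: List[Point], b: List[Point]) -> float:
    """Sum of point-wise Euclidean distances between two equal-length trajectories."""
    return sum(hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a, b))


def best_subsequence_match(query: List[Point], stored: List[Point]) -> Tuple[int, float]:
    """Brute-force subsequence matching: best starting offset and its distance."""
    best_offset, best_dist = -1, float("inf")
    for offset in range(len(stored) - len(query) + 1):
        d = distance(query, stored[offset:offset + len(query)])
        if d < best_dist:
            best_offset, best_dist = offset, d
    return best_offset, best_dist


if __name__ == "__main__":
    stored = [(float(i), float(i % 3)) for i in range(20)]  # synthetic stored trajectory
    query = [(5.0, 2.0), (6.0, 0.0), (7.0, 1.0)]            # example query-by-trajectory
    print(best_subsequence_match(query, stored))
```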
URI: http://dx.doi.org/10.1006/jvlc.2000.0222
http://hdl.handle.net/11536/29239
ISSN: 1045-926X
DOI: 10.1006/jvlc.2000.0222
Journal: JOURNAL OF VISUAL LANGUAGES AND COMPUTING
Volume: 12
Issue: 6
Start page: 627
End page: 656
Appears in Collections:Articles


Files in This Item:

  1. 000172853800003.pdf
