Title: Composition and retrieval of visual information for video databases
Authors: Cheng, PJ
Yang, WP
Department of Computer Science
Keywords: content-based retrieval;video data modeling;spatio-temporal composition;query by example and trajectory matching
Issue Date: 1-Dec-2001
Abstract: This paper presents a new visual aggregation model for representing visual information about moving objects in video data. Building on available automatic scene segmentation and object tracking algorithms, the proposed model provides eight operations for computing object motions at various levels of semantic granularity. It represents the trajectory, color, and dimensions of a single moving object, as well as the directional and topological relations among multiple objects over a time interval. Each motion representation can be normalized to reduce computational cost and improve storage utilization. To facilitate query processing, two optimal approximate matching algorithms are designed to match the time-series visual features of moving objects. Experimental results indicate that the proposed algorithms substantially outperform conventional subsequence matching methods in measuring the similarity between two trajectories. Finally, the visual aggregation model is integrated into a relational database system, and a prototype content-based video retrieval system has been implemented. (C) 2001 Academic Press.
URI: http://dx.doi.org/10.1006/jvlc.2000.0222
http://hdl.handle.net/11536/29239
ISSN: 1045-926X
DOI: 10.1006/jvlc.2000.0222
Journal: JOURNAL OF VISUAL LANGUAGES AND COMPUTING
Volume: 12
Issue: 6
Begin Page: 627
End Page: 656
Appears in Collections:Articles
Files in This Item:

  1. 000172853800003.pdf