Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Sun, Shih-Wei | en_US |
dc.contributor.author | Wang, Yu-Chiang Frank | en_US |
dc.contributor.author | Huang, Fay | en_US |
dc.contributor.author | Liao, Hong-Yuan Mark | en_US |
dc.date.accessioned | 2014-12-08T15:30:21Z | - |
dc.date.available | 2014-12-08T15:30:21Z | - |
dc.date.issued | 2013-04-01 | en_US |
dc.identifier.issn | 1047-3203 | en_US |
dc.identifier.uri | http://dx.doi.org/10.1016/j.jvcir.2012.12.003 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/21702 | - |
dc.description.abstract | In this paper, we present an automatic foreground object detection method for videos captured by freely moving cameras. While we focus on extracting a single foreground object of interest throughout a video sequence, our approach does not require any training data nor the interaction by the users. Based on the SIFT correspondence across video frames, we construct robust SIFT trajectories in terms of the calculated foreground feature point probability. Our foreground feature point probability is able to determine candidate foreground feature points in each frame, without the need of user interaction such as parameter or threshold tuning. Furthermore, we propose a probabilistic consensus foreground object template (CFOT), which is directly applied to the input video for moving object detection via template matching. Our CFOT can be used to detect the foreground object in videos captured by a fast moving camera, even if the contrast between the foreground and background regions is low. Moreover, our proposed method can be generalized to foreground object detection in dynamic backgrounds, and is robust to viewpoint changes across video frames. The contribution of this paper is trifold: (1) we provide a robust decision process to detect the foreground object of interest in videos with contrast and viewpoint variations; (2) our proposed method builds longer SIFT trajectories, and this is shown to be robust and effective for object detection tasks; and (3) the construction of our CFOT is not sensitive to the initial estimation of the foreground region of interest, while its use can achieve excellent foreground object detection results on real-world video data. (c) 2012 Elsevier Inc. All rights reserved. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | Template matching | en_US |
dc.subject | Object tracking | en_US |
dc.subject | Video object segmentation | en_US |
dc.subject | Foreground segmentation | en_US |
dc.subject | Background subtraction | en_US |
dc.title | Moving foreground object detection via robust SIFT trajectories | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1016/j.jvcir.2012.12.003 | en_US |
dc.identifier.journal | JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION | en_US |
dc.citation.volume | 24 | en_US |
dc.citation.issue | 3 | en_US |
dc.citation.spage | 232 | en_US |
dc.citation.epage | 243 | en_US |
dc.contributor.department | 資訊科學與工程研究所 | zh_TW |
dc.contributor.department | Institute of Computer Science and Engineering | en_US |
dc.identifier.wosnumber | WOS:000317149200003 | - |
dc.citation.woscount | 2 | - |
Appears in Collections: | Journal Papers
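
The abstract above outlines the method's first stage: matching SIFT features across frames to build long, robust trajectories before estimating the foreground feature point probability and the consensus foreground object template (CFOT). As a rough illustration only, not the authors' implementation, and omitting the probability model and CFOT entirely, the sketch below chains OpenCV SIFT matches between consecutive frames into per-point trajectories; the ratio-test threshold and minimum trajectory length are assumed values.

```python
# Illustration only: chains SIFT matches between consecutive frames into
# trajectories. This is not the paper's method (no foreground feature point
# probability, no CFOT); the ratio test and minimum length are assumed values.
import cv2


def sift_trajectories(frames, ratio=0.75, min_len=5):
    """Chain SIFT matches across a list of grayscale frames into trajectories."""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    prev_kp, prev_des = sift.detectAndCompute(frames[0], None)
    # One trajectory per initial keypoint: a list of (frame_index, x, y).
    trajectories = {i: [(0, *kp.pt)] for i, kp in enumerate(prev_kp)}
    active = {i: i for i in range(len(prev_kp))}  # keypoint idx -> trajectory id

    for t in range(1, len(frames)):
        kp, des = sift.detectAndCompute(frames[t], None)
        if des is None or prev_des is None:
            break
        next_active = {}
        for pair in matcher.knnMatch(prev_des, des, k=2):
            if len(pair) < 2:
                continue
            m, n = pair
            # Lowe's ratio test keeps only distinctive correspondences.
            if m.distance < ratio * n.distance and m.queryIdx in active:
                tid = active[m.queryIdx]
                trajectories[tid].append((t, *kp[m.trainIdx].pt))
                next_active[m.trainIdx] = tid
        prev_kp, prev_des, active = kp, des, next_active

    # Longer trajectories are more likely to track stable scene points.
    return [pts for pts in trajectories.values() if len(pts) >= min_len]
```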