Title: VOSTR: Video Object Segmentation via Transferable Representations
Authors: Chen, Yi-Wen
Tsai, Yi-Hsuan
Lin, Yen-Yu
Yang, Ming-Hsuan
Published under NCTU affiliation: National Chiao Tung University
Keywords: Video object segmentation;Transfer learning;Weakly-supervised learning
Issue Date: 1-January-1970
Abstract: In order to learn video object segmentation models, conventional methods require a large amount of pixel-wise ground truth annotations. However, collecting such supervised data is time-consuming and labor-intensive. In this paper, we exploit existing annotations in source images and transfer such visual information to segment videos with unseen object categories. Without using any annotations in the target video, we propose a method to jointly mine useful segments and learn feature representations that better adapt to the target frames. The entire process is decomposed into three tasks: (1) refining the responses with fully-connected CRFs, (2) solving a submodular function for selecting object-like segments, and (3) learning a CNN model with a transferable module for adapting seen categories in the source domain to the unseen target video. We present an iterative update scheme among the three tasks to self-learn the final solution for object segmentation. Experimental results on numerous benchmark datasets demonstrate that the proposed method performs favorably against the state-of-the-art algorithms.
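Step (2) of the pipeline casts segment selection as maximizing a submodular objective, which is typically solved with a greedy algorithm. The sketch below is a minimal, hypothetical illustration of that idea only: it is not the paper's actual objective or code. It uses a toy coverage function over pixel-index sets (monotone and submodular), and the names `coverage`, `greedy_select`, and `budget` are assumptions for this example.

```python
# Hypothetical sketch of greedy submodular segment selection (toy example,
# not the objective used in the paper).

def coverage(segments):
    """Toy monotone submodular objective: number of pixels covered
    by the union of the chosen segments."""
    covered = set()
    for seg in segments:
        covered |= seg
    return len(covered)

def greedy_select(candidates, objective, budget):
    """Greedily add the segment with the largest marginal gain until the
    budget is reached or no segment improves the objective. For monotone
    submodular objectives this greedy scheme carries the classic
    (1 - 1/e) approximation guarantee."""
    selected = []
    for _ in range(budget):
        best, best_gain = None, 0.0
        for seg in candidates:
            if seg in selected:
                continue
            gain = objective(selected + [seg]) - objective(selected)
            if gain > best_gain:
                best, best_gain = seg, gain
        if best is None:  # no remaining segment adds value
            break
        selected.append(best)
    return selected

# Toy usage: candidate segments represented as sets of pixel indices.
candidates = [{1, 2}, {2, 3}, {4, 5, 6}]
picked = greedy_select(candidates, coverage, budget=2)
```

In the paper this greedy step is interleaved with CRF refinement and CNN updates, so the candidate pool and scores would change on each iteration; the sketch shows only a single selection pass.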
URI: http://dx.doi.org/10.1007/s11263-019-01224-x
http://hdl.handle.net/11536/153936
ISSN: 0920-5691
DOI: 10.1007/s11263-019-01224-x
Journal: INTERNATIONAL JOURNAL OF COMPUTER VISION
Start page: 0
End page: 0
Appears in Collections: Journal Articles