Full metadata record
DC Field | Value | Language
dc.contributor.author | Lai, Hsueh-Ying | en_US
dc.contributor.author | Tsai, Yi-Hsuan | en_US
dc.contributor.author | Chiu, Wei-Chen | en_US
dc.date.accessioned | 2020-10-05T02:00:29Z | -
dc.date.available | 2020-10-05T02:00:29Z | -
dc.date.issued | 2019-01-01 | en_US
dc.identifier.isbn | 978-1-7281-3293-8 | en_US
dc.identifier.issn | 1063-6919 | en_US
dc.identifier.uri | http://dx.doi.org/10.1109/CVPR.2019.00199 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/155012 | -
dc.description.abstract | Stereo matching and flow estimation are two essential tasks for scene understanding, spatially in 3D and temporally in motion. Existing approaches have focused on the unsupervised setting due to the limited resources for obtaining large-scale ground-truth data. To construct a self-learnable objective, co-related tasks are often linked together to form a joint framework. However, prior work usually utilizes independent networks for each task, which prevents learning shared feature representations across models. In this paper, we propose a single and principled network to jointly learn spatiotemporal correspondence for stereo matching and flow estimation, with a newly designed geometric connection as the unsupervised signal for temporally adjacent stereo pairs. We show that our method performs favorably against several state-of-the-art baselines for both unsupervised depth and flow estimation on the KITTI benchmark dataset. | en_US
dc.language.iso | en_US | en_US
dc.title | Bridging Stereo Matching and Optical Flow via Spatiotemporal Correspondence | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.doi | 10.1109/CVPR.2019.00199 | en_US
dc.identifier.journal | 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019) | en_US
dc.citation.spage | 1890 | en_US
dc.citation.epage | 1899 | en_US
dc.contributor.department | Published under the university's name (交大名義發表) | zh_TW
dc.contributor.department | National Chiao Tung University | en_US
dc.identifier.wosnumber | WOS:000529484002006 | en_US
dc.citation.woscount | 3 | en_US
Appears in Collections: Conferences Paper