Full metadata record
DC Field    Value    Language
dc.contributor.author    Chang, Jia-Ren    en_US
dc.contributor.author    Chen, Yong-Sheng    en_US
dc.date.accessioned    2019-04-02T06:04:35Z    -
dc.date.available    2019-04-02T06:04:35Z    -
dc.date.issued    2018-01-01    en_US
dc.identifier.issn    1063-6919    en_US
dc.identifier.uri    http://dx.doi.org/10.1109/CVPR.2018.00567    en_US
dc.identifier.uri    http://hdl.handle.net/11536/151019    -
dc.description.abstract    Recent work has shown that depth estimation from a stereo pair of images can be formulated as a supervised learning task to be resolved with convolutional neural networks (CNNs). However, current architectures rely on patch-based Siamese networks, lacking the means to exploit context information for finding correspondence in ill-posed regions. To tackle this problem, we propose PSMNet, a pyramid stereo matching network consisting of two main modules: spatial pyramid pooling and 3D CNN. The spatial pyramid pooling module exploits global context information by aggregating context at different scales and locations to form a cost volume. The 3D CNN learns to regularize the cost volume using stacked hourglass networks in conjunction with intermediate supervision. The proposed approach was evaluated on several benchmark datasets. Our method ranked first on the KITTI 2012 and 2015 leaderboards as of March 18, 2018. The code for PSMNet is available at: https://github.com/JiaRenChang/PSMNet.    en_US
dc.language.iso    en_US    en_US
dc.title    Pyramid Stereo Matching Network    en_US
dc.type    Proceedings Paper    en_US
dc.identifier.doi    10.1109/CVPR.2018.00567    en_US
dc.identifier.journal    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR)    en_US
dc.citation.spage    5410    en_US
dc.citation.epage    5418    en_US
dc.contributor.department    資訊工程學系    zh_TW
dc.contributor.department    Department of Computer Science    en_US
dc.identifier.wosnumber    WOS:000457843605058    en_US
dc.citation.woscount    4    en_US
Appears in Collections: Conference Papers
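The abstract describes aggregating left/right image features into a cost volume over candidate disparities, which a 3D CNN then regularizes. The idea of a concatenation-based cost volume can be sketched as follows; this is a minimal NumPy illustration of the general technique, not the paper's implementation, and the function name, shapes, and layout are assumptions.

```python
import numpy as np

def build_cost_volume(left_feat, right_feat, max_disp):
    """Concatenation-style cost volume for stereo matching.

    left_feat, right_feat: feature maps of shape (C, H, W).
    For each candidate disparity d, left pixel x is paired with
    right pixel x - d, and the two feature vectors are concatenated.
    Returns a 4D volume of shape (2C, max_disp, H, W) that a 3D CNN
    could then regularize.
    """
    C, H, W = left_feat.shape
    volume = np.zeros((2 * C, max_disp, H, W), dtype=left_feat.dtype)
    for d in range(max_disp):
        # Columns x < d have no valid right-image match and stay zero.
        volume[:C, d, :, d:] = left_feat[:, :, d:]
        volume[C:, d, :, d:] = right_feat[:, :, : W - d]
    return volume

# Example: small random feature maps.
left = np.random.rand(4, 8, 16).astype(np.float32)
right = np.random.rand(4, 8, 16).astype(np.float32)
vol = build_cost_volume(left, right, max_disp=3)
print(vol.shape)  # (8, 3, 8, 16)
```

At disparity 0 the two halves of the volume are simply the left and right feature maps; larger disparities shift the right features, which is why the leftmost columns are zero-padded.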