Full metadata record
DC Field | Value | Language
dc.contributor.author | Yang, Hsiao-Chien | en_US
dc.contributor.author | Chen, Po-Heng | en_US
dc.contributor.author | Chen, Kuan-Wen | en_US
dc.contributor.author | Lee, Chen-Yi | en_US
dc.contributor.author | Chen, Yong-Sheng | en_US
dc.date.accessioned | 2020-10-05T01:59:42Z | -
dc.date.available | 2020-10-05T01:59:42Z | -
dc.date.issued | 2020-01-01 | en_US
dc.identifier.issn | 1057-7149 | en_US
dc.identifier.uri | http://dx.doi.org/10.1109/TIP.2020.2991883 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/154831 | -
dc.description.abstract | Both structural and contextual information is essential and widely used in image analysis. However, current multi-view stereo (MVS) approaches usually use a single common pre-trained model as a pixel descriptor to extract features, which mixes structural and contextual information together and thus increases the difficulty of matching correspondences. In this paper, we propose FADE (feature aggregation for depth estimation), which treats spatial and context information separately and focuses on aggregating features for efficient learning of the MVS problem. Spatial information includes image details such as edges and corners, whereas context information comprises object features such as shapes and traits. To aggregate these multi-level features, we use an attention mechanism to select important features for matching. We then build a plane sweep volume by using a homography backward warping method to generate matching candidates. Furthermore, we propose a novel cost volume regularization network that aims to minimize the noise in the matching candidates. Finally, we take advantage of a 3D stacked hourglass network and regression to produce high-quality depth maps. With these well-aggregated features, FADE can efficiently perform dense depth reconstruction, achieving state-of-the-art performance in terms of accuracy while requiring the fewest model parameters. | en_US
dc.language.iso | en_US | en_US
dc.subject | Three-dimensional displays | en_US
dc.subject | Feature extraction | en_US
dc.subject | Estimation | en_US
dc.subject | Image reconstruction | en_US
dc.subject | Cameras | en_US
dc.subject | Visualization | en_US
dc.subject | Computational modeling | en_US
dc.subject | Multi-view stereo | en_US
dc.subject | depth estimation | en_US
dc.subject | feature aggregation | en_US
dc.subject | attention mechanism | en_US
dc.subject | homography | en_US
dc.subject | plane sweep algorithm | en_US
dc.title | FADE: Feature Aggregation for Depth Estimation With Multi-View Stereo | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1109/TIP.2020.2991883 | en_US
dc.identifier.journal | IEEE TRANSACTIONS ON IMAGE PROCESSING | en_US
dc.citation.volume | 29 | en_US
dc.citation.spage | 6590 | en_US
dc.citation.epage | 6600 | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | 電子工程學系及電子研究所 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.contributor.department | Department of Electronics Engineering and Institute of Electronics | en_US
dc.identifier.wosnumber | WOS:000545739000001 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Journal Articles
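The abstract describes building a plane sweep volume via homography backward warping: each hypothesized depth defines a fronto-parallel plane, and a planar homography maps reference-view pixels into each source view to generate matching candidates. The sketch below is a minimal NumPy illustration of that standard per-depth homography, not the paper's implementation; all names (`plane_sweep_homography`, `K_ref`, `K_src`, `R`, `t`) are hypothetical.

```python
import numpy as np

def plane_sweep_homography(K_ref, K_src, R, t, depth):
    """Homography mapping reference-view pixels to the source view for a
    fronto-parallel plane at the given depth (plane normal n = [0, 0, 1]).

    K_ref, K_src: 3x3 camera intrinsics; R, t: rotation and translation
    of the source camera relative to the reference camera.
    """
    n = np.array([0.0, 0.0, 1.0])  # plane normal in the reference frame
    # Standard plane-induced homography: H = K_src (R - t n^T / d) K_ref^-1
    H = K_src @ (R - np.outer(t, n) / depth) @ np.linalg.inv(K_ref)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1

# Sanity check: identical cameras with no relative motion give the
# identity mapping at any depth.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
H = plane_sweep_homography(K, K, np.eye(3), np.zeros(3), depth=2.0)
print(np.allclose(H, np.eye(3)))  # True
```

Sweeping `depth` over a set of hypotheses and warping each source image with the resulting homographies yields the stack of candidates that the cost volume regularization network then denoises.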