Full metadata record
DC Field | Value | Language
dc.contributor.author | Liao, Yi-Lun | en_US
dc.contributor.author | Yang, Yao-Cheng | en_US
dc.contributor.author | Lin, Yuan-Fang | en_US
dc.contributor.author | Chen, Pin-Jung | en_US
dc.contributor.author | Kuo, Chia-Wen | en_US
dc.contributor.author | Chiu, Wei-Chen | en_US
dc.contributor.author | Wang, Yu-Chiang Frank | en_US
dc.date.accessioned | 2019-10-05T00:09:44Z | -
dc.date.available | 2019-10-05T00:09:44Z | -
dc.date.issued | 2019-01-01 | en_US
dc.identifier.isbn | 978-1-4799-8131-1 | en_US
dc.identifier.issn | 1520-6149 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/152928 | -
dc.description.abstract | 3D reconstruction, inferring 3D shape information from a single 2D image, has drawn attention from the learning and vision communities. In this paper, we propose a framework for learning pose-aware 3D shape reconstruction. Our proposed model learns a deep representation for recovering the 3D object, with the ability to extract camera pose information but without any direct supervision of ground-truth camera pose. This is realized by exploiting the 2D-3D self-consistency between 2D masks and 3D voxels. Experiments qualitatively and quantitatively demonstrate the effectiveness and robustness of our model, which performs favorably against state-of-the-art methods. | en_US
dc.language.iso | en_US | en_US
dc.subject | deep learning | en_US
dc.subject | 3D shape reconstruction | en_US
dc.subject | camera pose estimation | en_US
dc.subject | perspective projection | en_US
dc.title | Learning Pose-Aware 3D Reconstruction via 2D-3D Self-Consistency | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) | en_US
dc.citation.spage | 3857 | en_US
dc.citation.epage | 3861 | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:000482554004019 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conference Papers
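The abstract describes enforcing 2D-3D self-consistency: the predicted 3D voxels, viewed under a camera pose the model estimates itself, should project to the observed 2D mask, so no ground-truth pose is needed. As a minimal illustration of that idea only (not the authors' implementation), the PyTorch sketch below rotates a predicted voxel grid by an estimated rotation and compares its silhouette projection against the observed mask. The function names, the orthographic projection (the paper's keywords suggest perspective projection), and the binary cross-entropy loss are all assumptions made for this sketch.

```python
import torch
import torch.nn.functional as F

def rotate_voxels(voxels: torch.Tensor, rotation: torch.Tensor) -> torch.Tensor:
    """Resample a voxel grid under an estimated camera rotation.

    voxels:   (B, 1, D, H, W) occupancy probabilities in [0, 1]
    rotation: (B, 3, 3) rotation matrices from the (hypothetical) pose branch
    """
    b = voxels.shape[0]
    # affine_grid expects a (B, 3, 4) affine matrix for 5-D inputs;
    # we use the rotation only, with zero translation.
    zeros = torch.zeros(b, 3, 1, device=voxels.device, dtype=voxels.dtype)
    theta = torch.cat([rotation, zeros], dim=2)
    grid = F.affine_grid(theta, list(voxels.shape), align_corners=False)
    return F.grid_sample(voxels, grid, align_corners=False)

def project_silhouette(voxels: torch.Tensor) -> torch.Tensor:
    # Orthographic projection: max occupancy along the depth axis.
    # This is a simplification; a perspective camera model would
    # warp the grid before projecting.
    return voxels.max(dim=2).values  # (B, 1, H, W)

def self_consistency_loss(pred_voxels: torch.Tensor,
                          pred_rotation: torch.Tensor,
                          observed_mask: torch.Tensor) -> torch.Tensor:
    """2D-3D self-consistency: the silhouette of the predicted shape,
    rendered under the predicted pose, should match the observed 2D mask."""
    rotated = rotate_voxels(pred_voxels, pred_rotation)
    silhouette = project_silhouette(rotated).clamp(0.0, 1.0)
    return F.binary_cross_entropy(silhouette, observed_mask)
```

Note that the loss is differentiable with respect to both the voxel occupancies and, through the grid resampling, the rotation, which is what would allow pose to be learned from masks alone, without direct pose supervision.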