Full metadata record
DC Field | Value | Language
dc.contributor.author | Syu, Jia-Hao | en_US
dc.contributor.author | Cho, Shih-Hsuan | en_US
dc.contributor.author | Wang, Sheng-Jyh | en_US
dc.contributor.author | Wang, Li-Chun | en_US
dc.date.accessioned | 2019-08-02T02:24:19Z | -
dc.date.available | 2019-08-02T02:24:19Z | -
dc.date.issued | 2018-01-01 | en_US
dc.identifier.isbn | 978-3-319-94211-7; 978-3-319-94210-0 | en_US
dc.identifier.issn | 0302-9743 | en_US
dc.identifier.uri | http://dx.doi.org/10.1007/978-3-319-94211-7_28 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/152463 | -
dc.description.abstract | In this paper, we propose an iterative contraction and merging (ICM) framework for semantic segmentation in indoor scenes. Given an input image and a raw depth image, we first derive a dense prediction map from a convolutional neural network (CNN) and a normal-vector map from the depth image. By combining the RGB-D image with these two maps, we guide the ICM process to produce a more accurate hierarchical segmentation tree in a bottom-up manner. Based on this hierarchical segmentation tree, we then design a decision process that uses the dense prediction map as a reference to make the final semantic segmentation decision. Experimental results show that the proposed method generates much more accurate object boundaries than state-of-the-art methods. | en_US
dc.language.iso | en_US | en_US
dc.subject | Convolutional neural network | en_US
dc.subject | Iterative contraction and merging | en_US
dc.subject | RGB-D image | en_US
dc.subject | Semantic segmentation | en_US
dc.title | Semantic Segmentation of Indoor-Scene RGB-D Images Based on Iterative Contraction and Merging | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.doi | 10.1007/978-3-319-94211-7_28 | en_US
dc.identifier.journal | IMAGE AND SIGNAL PROCESSING (ICISP 2018) | en_US
dc.citation.volume | 10884 | en_US
dc.citation.spage | 252 | en_US
dc.citation.epage | 261 | en_US
dc.contributor.department | 電子工程學系及電子研究所 | zh_TW
dc.contributor.department | 電信工程研究所 | zh_TW
dc.contributor.department | Department of Electronics Engineering and Institute of Electronics | en_US
dc.contributor.department | Institute of Communications Engineering | en_US
dc.identifier.wosnumber | WOS:000469336800028 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conferences Paper