Full metadata record
DC Field                      Value                                   Language
dc.contributor.author         Hung, Shang-Wei                         en_US
dc.contributor.author         Lo, Shao-Yuan                           en_US
dc.contributor.author         Hang, Hsueh-Ming                        en_US
dc.date.accessioned           2020-05-05T00:01:59Z                    -
dc.date.available             2020-05-05T00:01:59Z                    -
dc.date.issued                2019-01-01                              en_US
dc.identifier.isbn            978-1-5386-6249-6                       en_US
dc.identifier.issn            1522-4880                               en_US
dc.identifier.uri             http://hdl.handle.net/11536/154046      -
dc.description.abstract       Semantic segmentation has made encouraging progress in recent years, owing to the success of deep convolutional networks. Meanwhile, depth sensors have become prevalent, so depth maps can be acquired more easily. However, few studies focus on the RGB-D semantic segmentation task, and exploiting depth information effectively to improve performance remains a challenge. In this paper, we propose a novel solution named LDFNet, which incorporates Luminance, Depth and Color information in a fusion-based network. It includes a sub-network that processes depth maps and employs luminance images to assist the depth information during processing. LDFNet outperforms other state-of-the-art systems on the Cityscapes dataset, and its inference speed is faster than that of most existing networks. The experimental results demonstrate the effectiveness of the proposed multi-modal fusion network and its potential for practical applications.  en_US
dc.language.iso               en_US                                   en_US
dc.subject                    RGB-D semantic segmentation             en_US
dc.subject                    depth map                               en_US
dc.subject                    illuminance                             en_US
dc.subject                    fusion-based network                    en_US
dc.title                      INCORPORATING LUMINANCE, DEPTH AND COLOR INFORMATION BY A FUSION-BASED NETWORK FOR SEMANTIC SEGMENTATION  en_US
dc.type                       Proceedings Paper                       en_US
dc.identifier.journal         2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP)  en_US
dc.citation.spage             2374                                    en_US
dc.citation.epage             2378                                    en_US
dc.contributor.department     交大名義發表                            zh_TW
dc.contributor.department     電子工程學系及電子研究所                zh_TW
dc.contributor.department     National Chiao Tung University          en_US
dc.contributor.department     Department of Electronics Engineering and Institute of Electronics  en_US
dc.identifier.wosnumber       WOS:000521828602098                     en_US
dc.citation.woscount          0                                       en_US
Appears in Collections: Conferences Paper
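
A minimal sketch of the fusion idea described in the abstract, assuming a PyTorch implementation: one encoder branch for the RGB image, one for a depth map concatenated with a luminance image, a simple concatenation fusion, and a small decoder that predicts per-pixel class logits. The class name FusionSegNet, the layer widths, the two-channel (depth + luminance) input, and the fusion operator are illustrative assumptions, not the published LDFNet design.

import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, stride=1):
    # 3x3 convolution -> batch norm -> ReLU
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class FusionSegNet(nn.Module):
    """Two-branch fusion sketch: RGB branch + (depth, luminance) branch (assumed layout)."""

    def __init__(self, num_classes=19):  # 19 classes, as on Cityscapes
        super().__init__()
        # Color branch: encodes the 3-channel RGB image at 1/4 resolution.
        self.rgb_encoder = nn.Sequential(
            conv_block(3, 32, stride=2),
            conv_block(32, 64, stride=2),
        )
        # Depth branch: encodes depth assisted by luminance (2 input channels).
        self.depth_encoder = nn.Sequential(
            conv_block(2, 32, stride=2),
            conv_block(32, 64, stride=2),
        )
        # Decoder: fuses both feature maps and predicts per-pixel class logits.
        self.decoder = nn.Sequential(
            conv_block(128, 64),
            nn.ConvTranspose2d(64, 64, kernel_size=4, stride=4),
            nn.Conv2d(64, num_classes, kernel_size=1),
        )

    def forward(self, rgb, depth, luminance):
        f_rgb = self.rgb_encoder(rgb)                           # (N, 64, H/4, W/4)
        f_dl = self.depth_encoder(torch.cat([depth, luminance], dim=1))
        fused = torch.cat([f_rgb, f_dl], dim=1)                 # concatenation fusion (assumption)
        return self.decoder(fused)                              # (N, num_classes, H, W)


if __name__ == "__main__":
    net = FusionSegNet()
    rgb = torch.randn(1, 3, 256, 512)        # RGB image
    depth = torch.randn(1, 1, 256, 512)      # depth map
    lum = torch.randn(1, 1, 256, 512)        # luminance (Y) image
    print(net(rgb, depth, lum).shape)        # torch.Size([1, 19, 256, 512])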