Full metadata record
DC Field / Value / Language
dc.contributor.author: Chuang, Tzu-Kuan [en_US]
dc.contributor.author: Lin, Ni-Ching [en_US]
dc.contributor.author: Chen, Jih-Shi [en_US]
dc.contributor.author: Hung, Chen-Hao [en_US]
dc.contributor.author: Huang, Yi-Wei [en_US]
dc.contributor.author: Teng, Chunchih [en_US]
dc.contributor.author: Huang, Haikun [en_US]
dc.contributor.author: Yu, Lap-Fai [en_US]
dc.contributor.author: Giarre, Laura [en_US]
dc.contributor.author: Wang, Hsueh-Cheng [en_US]
dc.date.accessioned: 2019-04-02T06:04:20Z
dc.date.available: 2019-04-02T06:04:20Z
dc.date.issued: 2018-01-01 [en_US]
dc.identifier.issn: 1050-4729 [en_US]
dc.identifier.uri: http://hdl.handle.net/11536/150770
dc.description.abstract: Navigation in pedestrian environments is critical to enabling independent mobility for the blind and visually impaired (BVI) in their daily lives. White canes have been commonly used to obtain contact feedback for following walls, curbs, or man-made trails, whereas guide dogs can assist in avoiding physical contact with obstacles or other pedestrians. However, tactile trail infrastructure and guide dogs are expensive to maintain. Inspired by the autonomous lane following of self-driving cars, we wished to combine the capabilities of existing navigation solutions for BVI users. We proposed an autonomous, trail-following robotic guide dog that is robust to variations in background texture, illumination, and interclass trail appearance. A deep convolutional neural network (CNN) was trained on both virtual and real-world environments. Our work included two major contributions: 1) conducting experiments to verify that the performance of our models trained in virtual worlds was comparable to that of models trained in the real world; and 2) conducting user studies with 10 blind users to verify that the proposed robotic guide dog could effectively assist them in reliably following man-made trails. [en_US]
dc.language.iso: en_US [en_US]
dc.title: Deep Trail-Following Robotic Guide Dog in Pedestrian Environments for People who are Blind and Visually Impaired - Learning from Virtual and Real Worlds [en_US]
dc.type: Proceedings Paper [en_US]
dc.identifier.journal: 2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA) [en_US]
dc.citation.spage: 5849 [en_US]
dc.citation.epage: 5855 [en_US]
dc.contributor.department: 電機工程學系 (Department of Electrical and Computer Engineering) [zh_TW]
dc.contributor.department: Department of Electrical and Computer Engineering [en_US]
dc.identifier.wosnumber: WOS:000446394504059 [en_US]
dc.citation.woscount: 0 [en_US]
Appears in Collections: Conferences Paper
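
For orientation only, the sketch below illustrates the kind of CNN-based trail-following policy described in the abstract above: a small image classifier that maps an onboard camera frame to a discrete steering command. This is a minimal sketch under stated assumptions, not the authors' network; the architecture, the command set, and all names (TrailFollowNet, COMMANDS) are hypothetical and do not come from the paper.

# Hypothetical sketch of a trail-following CNN (PyTorch); not the authors' model.
import torch
import torch.nn as nn

COMMANDS = ["turn_left", "go_straight", "turn_right"]  # assumed command set

class TrailFollowNet(nn.Module):
    def __init__(self, num_commands: int = len(COMMANDS)):
        super().__init__()
        # Three convolutional blocks downsample a 3 x 128 x 128 RGB frame.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_commands)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, W) normalized RGB frames from the onboard camera.
        h = self.features(x).flatten(1)
        return self.classifier(h)  # logits over steering commands

if __name__ == "__main__":
    net = TrailFollowNet()
    frame = torch.rand(1, 3, 128, 128)               # stand-in for a camera frame
    command = COMMANDS[net(frame).argmax(1).item()]   # most likely command
    print(command)

Training such a classifier on a mix of rendered (virtual-world) and real camera images, as the abstract describes, would only require pairing each frame with a command label; the specific datasets, losses, and training procedure used by the authors are not given in this record.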