Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Shih, Meng-Li | en_US |
dc.contributor.author | Chen, Yi-Chun | en_US |
dc.contributor.author | Tung, Chia-Yu | en_US |
dc.contributor.author | Sun, Cheng | en_US |
dc.contributor.author | Cheng, Ching-Ju | en_US |
dc.contributor.author | Chan, Liwei | en_US |
dc.contributor.author | Varadarajan, Srenivas | en_US |
dc.contributor.author | Sun, Min | en_US |
dc.date.accessioned | 2019-04-02T06:04:49Z | - |
dc.date.available | 2019-04-02T06:04:49Z | - |
dc.date.issued | 2018-01-01 | en_US |
dc.identifier.issn | 2153-0858 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/151050 | - |
dc.description.abstract | We develop a Deep Learning-based Wearable Vision-system with Vibrotactile-feedback (DLWV2) to guide Blind and Visually Impaired (BVI) people to reach objects. The system achieves high accuracy in object detection and tracking in 3-D using an extended deep learning-based 2.5D detector and a 3-D object tracker able to track 3-D object locations even outside the camera field-of-view. We train our detector on a large number of images with 2.5D object ground-truth (i.e., 2-D object bounding boxes and distance from the camera to objects). A novel combination of the HTC Vive Tracker with our system enables us to automatically obtain the ground-truth labels for training while requiring very little human effort to set up the system. Moreover, our system processes frames in real-time through a client-server computing platform so that BVI people can receive real-time vibrotactile guidance. We conduct a thorough user study on 12 BVI people in new environments with object instances unseen during training. Our system outperforms the non-assistive guiding strategy with statistical significance in both time and the number of contacts with irrelevant objects. Finally, interviews with BVI users confirm that our system with distance-based vibrotactile feedback is mostly preferred, especially for objects requiring gentle manipulation, such as a bottle with water inside. | en_US |
dc.language.iso | en_US | en_US |
dc.title | DLWV2: a Deep Learning-based Wearable Vision-system with Vibrotactile-feedback for Visually Impaired People to Reach Objects | en_US |
dc.type | Proceedings Paper | en_US |
dc.identifier.journal | 2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) | en_US |
dc.citation.spage | 7904 | en_US |
dc.citation.epage | 7911 | en_US |
dc.contributor.department | 交大名義發表 (published under the NCTU name) | zh_TW |
dc.contributor.department | National Chiao Tung University | en_US |
dc.identifier.wosnumber | WOS:000458872707022 | en_US |
dc.citation.woscount | 0 | en_US |
Appears in Collections: | Conference Papers |