Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Tsai, Richard Yi-Chia | en_US |
dc.contributor.author | Ke, Hans Ting-Yuan | en_US |
dc.contributor.author | Lin, Kate Ching-Ju | en_US |
dc.contributor.author | Tseng, Yu-Chee | en_US |
dc.date.accessioned | 2020-01-02T00:03:28Z | - |
dc.date.available | 2020-01-02T00:03:28Z | - |
dc.date.issued | 2019-01-01 | en_US |
dc.identifier.isbn | 978-1-5386-6026-3 | en_US |
dc.identifier.issn | 1050-4729 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/153331 | - |
dc.description.abstract | Person identification and tracking (PIT) is an essential issue in computer vision and robotic applications. It has long been studied and achieved by technologies such as RFID or face/fingerprint/iris recognition. These approaches, however, have their limitations due to environmental constraints (such as lighting and obstacles) or require close contact with specific devices. Therefore, their recognition accuracy highly depends on use scenarios. In this work, we propose RCU (Robot Catch yoU), an accompanist robot system that provides follow-me or guide-me services. Such robots can distinguish users' profiles in front of them and keep tracking a specific target person. We study a more challenging scenario where the target person may be under occlusion from time to time. To enable robust PIT, we develop a data fusion technique that integrates two types of sensors, an RGB-D camera and wearable inertial sensors. Since the data generated by these sensors share common features, we are able to fuse them to achieve identity-aware tracking. Practical issues, such as time synchronization and coordinate calibration, are also addressed. We implement our design on a robotic platform and show that it can track a target person even when no biological feature is captured by the RGB-D camera. Our experimental evaluation shows a recognition rate of 95% and a following rate of 88%. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | Computer Vision | en_US |
dc.subject | Data Fusion | en_US |
dc.subject | IoT | en_US |
dc.subject | Person Identification | en_US |
dc.subject | Tracking | en_US |
dc.subject | Robotics | en_US |
dc.subject | Wearable Computing | en_US |
dc.title | Enabling Identity-Aware Tracking via Fusion of Visual and Inertial Features | en_US |
dc.type | Proceedings Paper | en_US |
dc.identifier.journal | 2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA) | en_US |
dc.citation.spage | 2260 | en_US |
dc.citation.epage | 2266 | en_US |
dc.contributor.department | 資訊工程學系 | zh_TW |
dc.contributor.department | Department of Computer Science | en_US |
dc.identifier.wosnumber | WOS:000494942301108 | en_US |
dc.citation.woscount | 0 | en_US |
Appears in Collections: | Conference Papers |