Full metadata record
DC Field    Value    Language
dc.contributor.author    Kao, Hsin-Wei    en_US
dc.contributor.author    Ke, Ting-Yuan    en_US
dc.contributor.author    Lin, Kate Ching-Ju    en_US
dc.contributor.author    Tseng, Yu-Chee    en_US
dc.date.accessioned    2020-01-02T00:03:29Z    -
dc.date.available    2020-01-02T00:03:29Z    -
dc.date.issued    2019-01-01    en_US
dc.identifier.isbn    978-1-5386-6026-3    en_US
dc.identifier.issn    1050-4729    en_US
dc.identifier.uri    http://hdl.handle.net/11536/153339    -
dc.description.abstract    Advanced Internet of Things (IoT) techniques have made human-environment interaction much easier. Existing solutions usually enable such interactions without knowing the identities of the action performers. However, identifying the users who interact with the environment is key to enabling personalized services. To provide such an add-on service, we propose WTW (who takes what), a system that identifies which user takes what object. Unlike traditional vision-based approaches, which are typically vulnerable to blockage, WTW combines the feature information of three types of data, i.e., images, skeletons, and IMU data, to enable reliable user-object matching and identification. By correlating the moving trajectory of a user monitored by inertial sensors with the movement of an object recorded in the video, WTW reliably identifies a user and matches him/her with the object being taken. Our prototype evaluation shows that WTW achieves a recognition rate of over 90% even in a crowd. The system remains reliable even when users are located close to one another and take objects at roughly the same time.    en_US
dc.language.iso    en_US    en_US
dc.title    Who Takes What: Using RGB-D Camera and Inertial Sensor for Unmanned Monitor    en_US
dc.type    Proceedings Paper    en_US
dc.identifier.journal    2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA)    en_US
dc.citation.spage    8063    en_US
dc.citation.epage    8069    en_US
dc.contributor.department    資訊工程學系    zh_TW
dc.contributor.department    Department of Computer Science    en_US
dc.identifier.wosnumber    WOS:000494942305130    en_US
dc.citation.woscount    0    en_US
Appears in Collections: Conferences Paper
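
Note on the abstract above: the following is a minimal, hypothetical Python/NumPy sketch of the user-object matching idea it describes, i.e. correlating a user's inertial motion with an object's visual movement and assigning the object to the best-correlated user. All function names, the data layout, and the plain Pearson-correlation matching rule are assumptions for illustration; the paper's actual pipeline also fuses image and skeleton features.

```python
# Hypothetical sketch of trajectory-correlation matching, not the paper's method.
import numpy as np


def object_speed(trajectory: np.ndarray) -> np.ndarray:
    """Per-frame speed of a (T, D) object trajectory tracked in the video."""
    return np.linalg.norm(np.diff(trajectory, axis=0), axis=1)


def imu_intensity(imu_samples: np.ndarray) -> np.ndarray:
    """Per-sample motion intensity from (T, 3) IMU readings (sample magnitude)."""
    return np.linalg.norm(imu_samples, axis=1)


def correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation over the overlapping portion of two 1-D signals."""
    n = min(len(a), len(b))          # assumes both streams share the same sample rate
    a, b = a[:n], b[:n]
    if a.std() == 0.0 or b.std() == 0.0:
        return 0.0                   # flat signals carry no matching evidence
    return float(np.corrcoef(a, b)[0, 1])


def match_objects_to_users(imu_by_user: dict, traj_by_object: dict) -> dict:
    """Assign each taken object to the user whose inertial motion best matches it."""
    matches = {}
    for obj_id, traj in traj_by_object.items():
        speed = object_speed(traj)
        scores = {uid: correlation(imu_intensity(s), speed)
                  for uid, s in imu_by_user.items()}
        matches[obj_id] = max(scores, key=scores.get)
    return matches


if __name__ == "__main__":
    # Synthetic example: "alice" moves in sync with the object, "bob" is mostly idle.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 4 * np.pi, 200)
    activity = np.sin(t) + 1.5                        # shared, always-positive motion profile
    obj_traj = np.stack([np.cumsum(activity), np.zeros_like(t)], axis=1)
    imu_by_user = {
        "alice": np.stack([activity, np.zeros_like(t), np.zeros_like(t)], axis=1),
        "bob": 0.05 * rng.standard_normal((200, 3)),
    }
    print(match_objects_to_users(imu_by_user, {"item_1": obj_traj}))  # -> {'item_1': 'alice'}
```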