Full metadata record
DC Field | Value | Language
dc.contributor.author | Lin, Chia-How | en_US
dc.contributor.author | Yang, Chia-Hsing | en_US
dc.contributor.author | Wang, Cheng-Kang | en_US
dc.contributor.author | Song, Kai-Tai | en_US
dc.contributor.author | Hu, Jwu-Sheng | en_US
dc.date.accessioned | 2014-12-08T15:03:12Z | -
dc.date.available | 2014-12-08T15:03:12Z | -
dc.date.issued | 2008 | en_US
dc.identifier.isbn | 978-1-4244-2212-8 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/1763 | -
dc.description.abstract | Human detection and tracking is important for user-friendly human-robot interaction. The robot should be able to find the user autonomously and keep its attention on the user in a human-like manner. In this paper, a design and experimental study of robust human detection and tracking through the fusion of several modalities of sensory information is presented. The multi-modal interaction design combines visual, audio, and laser scanner data for reliable detection and tracking of a user of interest. During tracking motion, obstacle avoidance behavior is activated whenever required to ensure safety. Furthermore, the user can assign the robot to interact with another user by speech command. Experimental results show that the robot can robustly track a person under complex scenarios. | en_US
dc.language.iso | en_US | en_US
dc.subject | human-robot interaction | en_US
dc.subject | focus attention | en_US
dc.subject | multi-modal system | en_US
dc.subject | service robots | en_US
dc.subject | human tracking | en_US
dc.title | A New Design on Multi-Modal Robotic Focus Attention | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2008 17TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, VOLS 1 AND 2 | en_US
dc.citation.spage | 598 | en_US
dc.citation.epage | 603 | en_US
dc.contributor.department | 電控工程研究所 | zh_TW
dc.contributor.department | Institute of Electrical and Control Engineering | en_US
dc.identifier.wosnumber | WOS:000261700900100 | -
Appears in Collections: Conferences Paper
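The abstract describes fusing visual, audio, and laser scanner measurements into one reliable estimate of the tracked user. The paper's actual fusion method is not given in this record; as a minimal illustration only, the sketch below combines independent one-dimensional position readings by inverse-variance weighting, a standard textbook fusion rule. All names here are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of multi-sensor fusion by inverse-variance
# weighting. This is NOT the paper's algorithm, only a generic
# illustration of combining modalities into one estimate.

def fuse(estimates):
    """Fuse (position, variance) pairs from independent sensors.

    Each reading is weighted by 1/variance, so more certain
    sensors dominate; the fused variance is 1/sum(weights),
    which is smaller than any single sensor's variance.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    position = sum(w * pos for w, (pos, _) in zip(weights, estimates)) / total
    variance = 1.0 / total
    return position, variance

# Example: a camera reports the user at 2.0 m (variance 0.25)
# and a laser scanner at 1.8 m (variance 0.04); the fused
# estimate lands close to the more precise laser reading.
pos, var = fuse([(2.0, 0.25), (1.8, 0.04)])
```

The fused variance is always below the best individual sensor's, which is one motivation for multi-modal designs like the one the abstract outlines.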