Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Cheng, Stone | en_US |
dc.contributor.author | Hsu, Charlie | en_US |
dc.date.accessioned | 2017-04-21T06:49:26Z | - |
dc.date.available | 2017-04-21T06:49:26Z | - |
dc.date.issued | 2015 | en_US |
dc.identifier.isbn | 978-1-4673-6704-2 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/135797 | - |
dc.description.abstract | This paper proposes a sequential framework that explores motion-rendering models for humanoid robots to express emotions, inspired by the real-time emotional locus of music signals. The music-emotion system progressively extracts musical features and characterizes music-induced emotions on an emotion plane to trace the real-time emotion locus of the music. Five feature sets are extracted from the music's WAV file. Feature-weighted scoring algorithms continuously mark the trajectory on the emotion plane, and the boundaries of four emotions are demarcated by a Gaussian mixture model. A graphic interface displays the tracking of the dynamic emotional locus. The music-emotion locus and robot movement are integrated and analyzed with a modified Laban movement analysis. A robot controller organized with multi-modal, whole-body awareness of music emotions gives rise to the robot's autonomous locomotion. | en_US |
dc.language.iso | en_US | en_US |
dc.title | Development of Motion Rendering using Laban Movement Analysis to Humanoid Robots Inspired by Real-Time Emotional Locus of Music Signals | en_US |
dc.type | Proceedings Paper | en_US |
dc.identifier.journal | 2015 24TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN) | en_US |
dc.citation.spage | 803 | en_US |
dc.citation.epage | 808 | en_US |
dc.contributor.department | 機械工程學系 | zh_TW |
dc.contributor.department | Department of Mechanical Engineering | en_US |
dc.identifier.wosnumber | WOS:000380393600133 | en_US |
dc.citation.woscount | 0 | en_US |
Appears in Collections: | Conferences Paper |