Full metadata record
DC Field | Value | Language
dc.contributor.author | Cheng, Stone | en_US
dc.contributor.author | Hsu, Charlie | en_US
dc.date.accessioned | 2017-04-21T06:49:26Z | -
dc.date.available | 2017-04-21T06:49:26Z | -
dc.date.issued | 2015 | en_US
dc.identifier.isbn | 978-1-4673-6704-2 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/135797 | -
dc.description.abstract | This paper proposes a sequential framework to explore motion-rendering models that enable humanoid robots to express emotions, inspired by the mechanisms of the real-time emotional locus of music signals. The music emotion system progressively extracts the features of music and characterizes music-induced emotions in an emotion plane to trace the real-time emotion locus of the music. Five feature sets are extracted from the WAV file of the music. Feature-weighted scoring algorithms continuously mark the trajectory on the emotion plane. The boundaries of four emotions are demarcated by a Gaussian mixture model. A graphic interface represents the tracking of the dynamic emotional locus. The music emotion locus and robot movement are integrated and analyzed by modified Laban movement analysis. The robot controller, organized with multi-modal whole-body awareness of music emotions, gives rise to the robot's autonomous locomotion. | en_US
dc.language.iso | en_US | en_US
dc.title | Development of Motion Rendering using Laban Movement Analysis to Humanoid Robots Inspired by Real-Time Emotional Locus of Music Signals | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2015 24TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN) | en_US
dc.citation.spage | 803 | en_US
dc.citation.epage | 808 | en_US
dc.contributor.department | 機械工程學系 | zh_TW
dc.contributor.department | Department of Mechanical Engineering | en_US
dc.identifier.wosnumber | WOS:000380393600133 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conference Papers
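The abstract describes a pipeline in which feature-weighted scoring maps per-frame music features onto a 2-D emotion plane, tracing an emotion locus that is then partitioned into four emotions. Below is a minimal sketch of that idea, assuming a valence-arousal plane with Russell-style quadrant labels and entirely hypothetical feature names and weights; the paper's actual five feature sets, learned weights, and GMM-demarcated boundaries are not reproduced here, and the quadrant test merely stands in for the GMM classifier.

```python
import numpy as np

# Hypothetical weights mapping per-frame music features to (valence, arousal).
# The paper's five feature sets and their learned weights are not given here;
# these rows are illustrative placeholders only.
FEATURE_WEIGHTS = np.array([
    [0.6, 0.1],   # e.g. a harmony/mode score -> contributes mostly to valence
    [0.1, 0.7],   # e.g. a loudness score     -> contributes mostly to arousal
    [0.3, 0.2],   # e.g. a tempo score        -> contributes to both
])

# Four-emotion quadrant labels on the valence-arousal plane (Russell-style),
# keyed by (valence >= 0, arousal >= 0).
QUADRANTS = {(True, True): "happy", (False, True): "angry",
             (False, False): "sad", (True, False): "calm"}

def emotion_locus(frames):
    """Map a sequence of per-frame feature vectors to points on the
    emotion plane, tracing the real-time emotion locus."""
    return np.asarray(frames, dtype=float) @ FEATURE_WEIGHTS

def label_point(point):
    """Classify one locus point into one of four emotions by quadrant
    (a stand-in for the paper's GMM-demarcated boundaries)."""
    valence, arousal = point
    return QUADRANTS[(valence >= 0, arousal >= 0)]

# Three example frames of (hypothetical) normalized feature scores.
frames = [[0.8, 0.9, 0.5], [-0.7, 0.6, -0.2], [-0.5, -0.8, -0.3]]
locus = emotion_locus(frames)                 # shape (3, 2): one plane point per frame
labels = [label_point(p) for p in locus]      # one emotion label per frame
```

In the paper this per-frame label stream would then drive the Laban-movement-analysis mapping to robot motion; here it simply illustrates how a continuous locus becomes a discrete emotion sequence.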