Full metadata record

DC Field | Value | Language
dc.contributor.author | Huang, Chih-Fang | en_US
dc.contributor.author | Cai, Yajun | en_US
dc.date.accessioned | 2018-08-21T05:56:25Z | -
dc.date.available | 2018-08-21T05:56:25Z | -
dc.date.issued | 2018-01-01 | en_US
dc.identifier.issn | 2190-3018 | en_US
dc.identifier.uri | http://dx.doi.org/10.1007/978-3-319-63856-0_14 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/146187 | -
dc.description.abstract | This paper proposes an innovative way to compose music automatically from the input of a heartbeat sensor, generating music that matches the corresponding emotion states. The typical 2D emotion plane with arousal and valence (A-V) states is adapted into our system to determine the generative music features. An algorithmic composition technique based on a Markov chain is used, together with an emotion-to-music-feature mapping method, to compose the desired music. The results show good success with various generative music, including sad, happy, joyful, and angry pieces, and the heartbeat values show good consistency with the corresponding emotion states. | en_US
dc.language.iso | en_US | en_US
dc.subject | 2D emotion plane | en_US
dc.subject | Arousal | en_US
dc.subject | Valence | en_US
dc.subject | Algorithmic composition | en_US
dc.subject | Emotion-music feature mapping | en_US
dc.title | Automated Music Composition Using Heart Rate Emotion Data | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.doi | 10.1007/978-3-319-63856-0_14 | en_US
dc.identifier.journal | ADVANCES IN INTELLIGENT INFORMATION HIDING AND MULTIMEDIA SIGNAL PROCESSING, PT I | en_US
dc.citation.volume | 81 | en_US
dc.citation.spage | 115 | en_US
dc.citation.epage | 120 | en_US
dc.contributor.department | 交大名義發表 (published under NCTU affiliation) | zh_TW
dc.contributor.department | National Chiao Tung University | en_US
dc.identifier.wosnumber | WOS:000434869600014 | en_US
Appears in Collections: Conference Papers
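The abstract describes mapping arousal-valence (A-V) emotion states onto generative music features via a Markov chain. The paper's actual transition tables and feature mapping are not given in this record, so the sketch below is a minimal illustration under assumed rules: valence selects the mode (major vs. minor), arousal scales tempo, and a step-biased Markov walk over scale degrees generates the pitch sequence. All scales, weights, and the tempo formula are assumptions for demonstration only.

```python
import random

# C major and C natural minor scales as MIDI note numbers (assumed mapping).
MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]
MINOR = [60, 62, 63, 65, 67, 68, 70, 72]

def compose(valence, arousal, length=16, seed=None):
    """Generate (tempo, pitch sequence) from 2D A-V input, both in [-1, 1]."""
    rng = random.Random(seed)
    scale = MAJOR if valence >= 0 else MINOR   # valence -> mode (assumption)
    tempo = int(80 + 60 * arousal)             # arousal -> BPM (assumption)
    # First-order Markov chain over scale degrees: the next degree depends
    # only on the current one, with small melodic steps weighted higher.
    idx = rng.randrange(len(scale))
    melody = [scale[idx]]
    for _ in range(length - 1):
        step = rng.choices([-2, -1, 0, 1, 2], weights=[1, 4, 2, 4, 1])[0]
        idx = min(max(idx + step, 0), len(scale) - 1)
        melody.append(scale[idx])
    return tempo, melody
```

For example, `compose(valence=0.8, arousal=0.5)` would yield a brisk major-mode line, while `compose(-0.7, -0.5)` would yield a slow minor-mode one; a real system would also vary dynamics, rhythm, and harmony per the emotion-music feature mapping.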