Title: | 以情緒為基礎之自動化遊戲場景配樂產生系統 Emotion-based Automated Music Generation for Game Scenario |
Authors: | 顏志豪 Yen, Chih-Hao; 黃志方 Huang, Chih-Fang; Master Program of Sound and Music Innovative Technologies, College of Engineering |
Keywords: | automated composition; game soundtrack; emotion; game scene; music generation; game |
Date of Issue: | 2011 |
Abstract: | This thesis designs a curve-drawing system built on the emotion plane, allowing a game designer to draw a curve for the emotional music to be generated, according to the emotional perception the designer wishes to evoke in players. Based on the curve, the system automatically generates music from the music features associated with the corresponding emotions, together with some music theory and practical heuristics. The thesis also constructs several game scenarios with implicit emotional content, adding scene colors converted from the emotional music curve as well as the generated emotional music, in order to test players' emotional responses. In the experiments, when subjects only listened to the automatically generated emotional music, the recognition rates for the four quadrants of the emotion plane were 78% for Quadrant I (happy/excited), 33% for Quadrant II (angry/tense), 63% for Quadrant III (sad), and 64% for Quadrant IV (calm), averaging 60%. With game scenes added but without background color, the rates were 70%, 26%, 66%, and 86% (average 62%); with background colors converted from the emotion curve added as well, the rates were 73%, 40%, 83%, and 66% (average 65.5%). The results show that subjects achieved a passing recognition rate from the generated music alone, and that adding scenes and colors further improved emotion recognition. The system can serve game designers both as a soundtrack generation tool and as a reference for choosing scene colors. |
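The abstract describes mapping points on a valence-arousal emotion plane to music features by quadrant. The thesis's actual feature mapping is not given here, so the following is only a hypothetical sketch of the general idea: arousal scales tempo, valence selects mode, and the four quadrants correspond to the emotion classes reported in the experiments (the tempo range and mapping function are illustrative assumptions, not the thesis's parameters).

```python
def emotion_to_music(valence, arousal):
    """Map a point in [-1, 1] x [-1, 1] on the emotion plane to
    illustrative musical parameters (tempo in BPM, mode, quadrant label).

    Hypothetical mapping for illustration only; the thesis's actual
    feature values are not reproduced here.
    """
    # Arousal drives tempo: higher arousal -> faster music (30..150 BPM here).
    tempo = int(90 + 60 * arousal)
    # Valence drives mode: positive -> major, negative -> minor.
    mode = "major" if valence >= 0 else "minor"
    # Quadrant labels matching the four emotion classes in the abstract.
    if valence >= 0 and arousal >= 0:
        quadrant = "I: happy/excited"
    elif valence < 0 and arousal >= 0:
        quadrant = "II: angry/tense"
    elif valence < 0:
        quadrant = "III: sad"
    else:
        quadrant = "IV: calm"
    return tempo, mode, quadrant


print(emotion_to_music(0.8, 0.7))    # -> (132, 'major', 'I: happy/excited')
print(emotion_to_music(-0.6, -0.4))  # -> (66, 'minor', 'III: sad')
```

A curve drawn on the plane would then be sampled point by point, with each sampled point converted to parameters like these to drive the music generator.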
URI: | http://140.113.39.130/cdrfb3/record/nctu/#GT079802516 http://hdl.handle.net/11536/46626 |
Appears in Collections: | Thesis |