Full metadata record
DC Field | Value | Language
---|---|---
dc.contributor.author | 許丞愷 | zh_TW |
dc.contributor.author | 鄭泗東 | zh_TW |
dc.contributor.author | Hsu, Cheng-Kai | en_US |
dc.contributor.author | Cheng, Stone | en_US |
dc.date.accessioned | 2018-01-24T07:42:05Z | - |
dc.date.available | 2018-01-24T07:42:05Z | - |
dc.date.issued | 2017 | en_US |
dc.identifier.uri | http://etd.lib.nctu.edu.tw/cdrfb3/record/nctu/#GT070451901 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/142363 | - |
dc.description.abstract | 本研究融合類別式情緒分類法與二維情緒平面作為情緒辨識模型,搭配機器學習技術和音樂訊號處理,建立即時性音樂情緒軌跡追蹤系統,將音樂訊號誘發的情感成份分類,並以視覺化平面呈現樂曲演繹情緒變動的軌跡,本研究亦以此系統分析聲景(Soundscape)所喚起的人類情緒感受,設計混音樂曲,及分析混音後之情緒變動軌跡。實驗過程中蒐集預判情緒標記「Pleasant」、「Solemn」、「Agitated」、「Exuberant」的古典音樂與流行音樂風格的樣本各192首作為兩套訓練資料,從中萃取音量、音樂事件密度、調性、和聲不和諧度和音色以代表音樂樣本的特徵,計算音訊特徵與情緒辨識之關聯性,透過情緒分數計算程序,並使用高斯混合模型(GMM)作為分類器劃定四種情緒類別的邊界,以建立圖像化情緒辨識介面,追蹤由音樂所誘發的人類情緒感受變化。實驗結果證實不同的訓練資料將導致兩個情緒辨識平面的邊界差異。聲景即為人類日常活動場域的聽覺環境,對於人類的情緒狀態、生活品質皆有影響,本研究側重於針對各種場域之商業目的或聽覺環境氣氛營造等需求,提供一套基於情緒辨識與心理聲學的環境聲音設計依據,應用音樂情緒辨識系統至聲景情緒分析之方法,為評估聲景錄音檔與音樂訊號混音後的聲音情緒軌跡變化,以模擬真實場域中,藉由播放背景音樂並善用其情感特性來幫助人類達到情緒狀態改變、轉換心境,進而影響人類的商業行為與決策之研究。 | zh_TW
dc.description.abstract | This study presents an approach to analyzing the emotional ingredients inherent in polyphonic music signals and applies it to soundscape emotion analysis. The proposed real-time music emotion trajectory tracking system is built with machine learning techniques, music signal processing, and an emotion recognition model that integrates a two-dimensional emotion plane with a categorical taxonomy. Two sets of training data are collected: one consists of popular music and the other of Western classical music, each containing 192 emotion-predefined music clips. Volume, onset density, mode, dissonance, and timbre are extracted to characterize each music excerpt. After an emotion-score computation process, a Gaussian mixture model (GMM) is used to demarcate the boundaries among four emotion states. A graphical interface that traces the mood locus on the emotion plane is established to follow the alteration of music-evoked human emotions. Experimental results verify that different sets of training data lead to different boundaries in the two emotion recognition models. A soundscape is the auditory environment of human daily activities, and it affects people's emotional state and quality of life. This study proposes an approach to environmental sound design based on emotion recognition and psychoacoustics, focusing on the needs of various venues for commercial purposes or the creation of auditory atmosphere. The soundscape study is conducted by evaluating the emotion-locus variation of selected urban soundscape recordings blended with music signals. This simulates playing background music in a real venue, exploiting the emotional characteristics of music to help people alter their emotional state and state of mind, and in turn to influence human commercial behavior and decision-making. | en_US
dc.language.iso | zh_TW | en_US |
dc.subject | 音樂情緒辨識 | zh_TW |
dc.subject | 高斯混合模型 | zh_TW |
dc.subject | 聲景 | zh_TW |
dc.subject | Music Emotion Recognition | en_US |
dc.subject | Gaussian Mixture Model (GMM) | en_US |
dc.subject | Soundscape | en_US |
dc.title | 基於不同曲風訓練資料之音樂情緒分類與演繹系統比較及應用於聲景情緒辨識與混音分析 | zh_TW |
dc.title | Comparison of Music Emotion Classification and Interpretation System Based on Different Genre of Training Data and Applied to Soundscape Emotion Recognition and Mixing Audio Analysis | en_US |
dc.type | Thesis | en_US |
dc.contributor.department | 工學院聲音與音樂創意科技碩士學位學程 | zh_TW |
Appears in Collections: | Thesis
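The abstract describes demarcating four emotion states ("Pleasant", "Solemn", "Agitated", "Exuberant") on a two-dimensional emotion plane with a GMM classifier. The snippet below is a minimal, dependency-free sketch of that idea, not the thesis implementation: it fits a single diagonal Gaussian per class (a full GMM would use several mixture components per class) to invented (valence, arousal) training points, then assigns a new point to the state with the highest log-likelihood. All coordinates are made up for illustration.

```python
import math

# Hypothetical training data: (valence, arousal) samples per emotion state.
# The thesis derives such features from audio (volume, onset density, mode,
# dissonance, timbre); here they are invented to keep the sketch self-contained.
TRAIN = {
    "Pleasant":  [(0.7, -0.4), (0.8, -0.3), (0.6, -0.5)],
    "Solemn":    [(-0.6, -0.5), (-0.7, -0.4), (-0.5, -0.6)],
    "Agitated":  [(-0.6, 0.6), (-0.7, 0.5), (-0.5, 0.7)],
    "Exuberant": [(0.7, 0.6), (0.6, 0.5), (0.8, 0.7)],
}

def fit_gaussian(samples):
    """Fit per-dimension mean and variance (diagonal covariance)."""
    n = len(samples)
    dims = len(samples[0])
    mean = [sum(s[d] for s in samples) / n for d in range(dims)]
    var = [max(sum((s[d] - mean[d]) ** 2 for s in samples) / n, 1e-6)
           for d in range(dims)]
    return mean, var

def log_likelihood(x, mean, var):
    """Log density of x under a diagonal Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

MODELS = {label: fit_gaussian(s) for label, s in TRAIN.items()}

def classify(x):
    """Assign x to the emotion state with the highest log-likelihood."""
    return max(MODELS, key=lambda lb: log_likelihood(x, *MODELS[lb]))

print(classify((0.65, -0.35)))  # a point near the "Pleasant" cluster
```

In the thesis, the class boundaries drawn this way differ between the classical and popular training sets, which is what the abstract's comparison of the two emotion recognition models refers to.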