Title: Design of Robotic Emotion Model and Human Emotion Recognition (機器人情感模型及情感辨識設計)
Authors: Han, Meng-Ju (韓孟儒); Song, Kai-Tai (宋開泰)
Department: Institute of Electrical and Control Engineering (電控工程研究所)
Keywords: robotic mood state transition; emotional model; behavior fusion; emotional interactions; facial expression generation; emotion recognition
Issue Date: 2013
Abstract: This thesis studies a robotic emotion model and its interaction design. A human-like mood transition method is proposed to improve a robot's ability to carry out autonomous emotional interaction with humans. To generate interactive behaviors with human-like emotional expression, a two-dimensional (2-D) emotional model is proposed that considers robotic emotion, mood and personality together, so that human-like emotional responses can be produced. In this design, the robot's personality is established with reference to the Big Five personality traits from psychology, and the influence on robotic mood transition is determined by these five personality parameters.

To express the robot's autonomous emotional state through continuous interactive behaviors, a method for fusing basic emotional behaviors is proposed to establish the behavior expression under different mood states. Based on these psychological findings, fuzzy Kohonen clustering networks are used to integrate personality, mood and emotional behaviors into a single emotion model that can be realized on a robot. Compared with existing designs, the proposed method has the merit of an objective theoretical basis for human-robot interaction design, rather than relying on the designers' own subjective assumptions.
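As a rough illustration of the mood-transition idea described above, the following Python sketch (assuming NumPy) updates a two-dimensional mood state toward an emotional stimulus, with the step size scaled by Big Five parameters. The parameter names, values and the linear update rule are illustrative assumptions, not the formulation developed in the thesis.

    # A minimal sketch: a 2-D mood state is pulled toward an emotional
    # stimulus, and hypothetical Big Five parameters scale the step size.
    import numpy as np

    personality = {  # hypothetical Big Five values in [0, 1]
        "openness": 0.6, "conscientiousness": 0.5, "extraversion": 0.8,
        "agreeableness": 0.7, "neuroticism": 0.3,
    }

    def mood_transition(mood, stimulus, p, base_rate=0.1):
        """Move the 2-D mood vector toward the stimulus; here extraversion
        strengthens the reaction and neuroticism adds volatility."""
        rate = base_rate * (1.0 + p["extraversion"] + 0.5 * p["neuroticism"])
        new_mood = mood + rate * (np.asarray(stimulus, dtype=float) - mood)
        return np.clip(new_mood, -1.0, 1.0)  # keep the state inside the plane

    mood = np.zeros(2)                                     # neutral mood
    mood = mood_transition(mood, [0.8, 0.4], personality)  # "happy" stimulus
    print(mood)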
For emotion recognition, two methods are studied so that the robot can recognize the user's emotional state: bimodal emotion recognition that combines facial images and speech, and speech-signal-based emotion recognition. In the bimodal design, a probabilistic strategy based on support vector machine (SVM) classification is proposed to determine statistically suitable fusion weights for the two feature modalities. The fusion weights are determined from the distance between the test data and the classification hyperplane, together with the standard deviation of the training samples; in the classification stage, the result from the modality with the higher fusion weight is taken as the final recognition output. In the speech-signal-based design, the speech signal itself is processed and classified. End-point detection and frame segmentation are performed in the pre-processing stage, statistical features of the energy contour are then computed, and Fisher's linear discriminant analysis (FLDA) is applied to enhance the recognition rate.

The proposed recognition methods have been implemented on a DSP-based image and speech processing system and integrated on a robot to demonstrate emotional interaction with humans. To evaluate the developed emotion model, an artificial face simulator was built to display changes in emotional expression. Questionnaire surveys were carried out in which participants observed the simulator's emotional responses to the user's expressions. The evaluation results show that the participants' perceptions coincide with the original design goals.
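The fusion rule of the bimodal recognition described above can be pictured with the following Python sketch, which assumes scikit-learn rather than the DSP implementation used in the thesis: each modality's SVM yields a confidence weight from its decision-function distance normalised by the spread of the training margins, and the modality with the higher weight supplies the final label. The normalisation and the toy data are illustrative assumptions, not the thesis' exact formula.

    # A minimal sketch of weight-based fusion of two SVM modalities.
    import numpy as np
    from sklearn.svm import SVC

    def train_modality(X, y):
        clf = SVC(kernel="linear").fit(X, y)
        margin_std = np.std(clf.decision_function(X))  # spread of training margins
        return clf, margin_std

    def fused_prediction(modalities, samples):
        """modalities: list of (clf, margin_std); samples: one feature vector
        per modality for the same observation."""
        best_weight, best_label = -np.inf, None
        for (clf, margin_std), x in zip(modalities, samples):
            margin = abs(clf.decision_function([x])[0])  # distance to hyperplane
            weight = margin / (margin_std + 1e-9)        # hedged normalisation
            if weight > best_weight:
                best_weight, best_label = weight, clf.predict([x])[0]
        return best_label

    # Toy usage with random "face" and "speech" features.
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 40)
    face = rng.normal(size=(40, 8)) + y[:, None]
    speech = rng.normal(size=(40, 5)) + y[:, None]
    models = [train_modality(face, y), train_modality(speech, y)]
    print(fused_prediction(models, [face[0], speech[0]]))

Similarly, the speech-only pipeline (end-point detection, frame-energy statistics, FLDA) might be prototyped as below; the energy threshold, the particular statistics and the toy signals are assumptions made only for illustration.

    # A minimal sketch: crude energy-based end-point detection, statistics of
    # the frame-energy contour as features, and Fisher's LDA projection.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def energy_features(signal, frame_len=256, threshold=0.01):
        frames = signal[: len(signal) // frame_len * frame_len].reshape(-1, frame_len)
        energy = (frames ** 2).mean(axis=1)   # short-time energy per frame
        voiced = energy[energy > threshold]   # keep frames above the threshold
        if voiced.size == 0:
            voiced = energy
        return np.array([voiced.mean(), voiced.std(), voiced.max(), voiced.min()])

    # Toy usage: two hypothetical "emotion classes" with different loudness.
    rng = np.random.default_rng(1)
    X = np.vstack([energy_features(rng.normal(scale=s, size=4000))
                   for s in [0.5] * 10 + [1.5] * 10])
    y = np.array([0] * 10 + [1] * 10)
    flda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
    print(flda.transform(X)[:3])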
URI: http://140.113.39.130/cdrfb3/record/nctu/#GT079212817
http://hdl.handle.net/11536/40366
Appears in Collections: Thesis


Files in This Item:

  1. 281701.pdf
