Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 石偉呈 | en_US |
dc.contributor.author | Wei-Chung Shih | en_US |
dc.contributor.author | 蔡文祥 | en_US |
dc.contributor.author | Wen-Hsiang Tsai | en_US |
dc.date.accessioned | 2014-12-12T02:25:13Z | - |
dc.date.available | 2014-12-12T02:25:13Z | - |
dc.date.issued | 2000 | en_US |
dc.identifier.uri | http://140.113.39.130/cdrfb3/record/nctu/#NT890394086 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/66992 | - |
dc.description.abstract | 本論文提出了一個虛擬主播唇形動畫系統。在本研究中,我們使用動作擷取技術來擷取中文基本音節發音所相對應的嘴形。藉由影像處理技術來抽取安置在嘴部附近的特徵點,並使用電腦視覺的技術求出這些特徵點的三維資訊。再根據這些三維資訊,分析出每種嘴形所相對應於人臉模型上的參數,包括下顎轉動角度以及每條肌肉的收縮係數。我們利用上述的嘴形參數分析過程,為四百一十一個中文基本音建立了相對應的嘴形動畫資料庫。在這個研究中,我們採用了一個以肌肉為主的臉部模型來模擬逼真的嘴部動畫。我們結合了模擬人類下顎轉動以及嘴部附近肌肉的拉扯效應來合成各式各樣的嘴形。在此,肌肉模型是藉由拉扯臉部模型表面所形成的效果。在所提唇形動畫系統中,我們使用一語音分析程式來獲取一段語音中每個音節的發音長度以及對應的音節編號,並使用二種時間戳記來達成語音與唇形動畫的同步效果。實驗結果證明以上的方法確實可行。 | zh_TW |
dc.description.abstract | A system for lip animation for virtual announcer applications is proposed. We use a motion capture method to capture the mouth postures associated with each base-syllable in Mandarin. Image processing techniques are used to extract feature points placed around the mouth, and computer vision techniques are used to compute the 3D information of these feature points. From this information, we analyze the related animation parameters of the face model, including the jaw rotation angle and the contraction coefficient of each muscle. Through this analysis process, mouth animation elements for the 411 Mandarin base-syllables are constructed. A muscle-based face model is adopted to generate realistic speech animation. We combine a physics-based muscle model and a jaw rotation model to synthesize a variety of mouth shapes; here the muscles are modeled by forces that deform the polygon mesh of the face. In the proposed lip animation system, a speech analysis program is used to obtain the duration and the corresponding phonemic index of each syllable in an utterance, and two kinds of time stamps are used to synchronize the speech with the lip animation. Experimental results show the feasibility and practicability of the proposed methods. | en_US |
dc.language.iso | zh_TW | en_US |
dc.subject | 虛擬播報員 | zh_TW |
dc.subject | 唇形動畫 | zh_TW |
dc.subject | 人臉動畫 | zh_TW |
dc.subject | 人臉表情 | zh_TW |
dc.subject | 動作擷取 | zh_TW |
dc.subject | 語音同步 | zh_TW |
dc.subject | 語音切割 | zh_TW |
dc.subject | virtual announcer | en_US |
dc.subject | lip animation | en_US |
dc.subject | facial animation | en_US |
dc.subject | facial expression | en_US |
dc.subject | motion capture | en_US |
dc.subject | speech synchronization | en_US |
dc.subject | speech segmentation | en_US |
dc.title | 結合語音分析、三維圖學及電腦視覺技術製作虛擬播報員唇形動畫之研究 | zh_TW |
dc.title | A Study on Lip Animation for Virtual Announcers by Combining Voice Analysis, 3D Graphics, and Computer Vision Techniques | en_US |
dc.type | Thesis | en_US |
dc.contributor.department | 資訊科學與工程研究所 | zh_TW |
Appears in Collections: | Thesis |