Title: A Study on Synthesis Techniques for Motion, Skin, and Clothing of Realistic Human Characters --- A Study on Motion Synthesis of Body and Facial Animation for Human Characters
Author: LIN I-CHEN (林奕成)
Department of Computer Science, National Chiao Tung University
Keywords: character motion synthesis; facial animation; motion capture; 3D computer animation
Issue Date: 2008
Abstract: In computer animation, characters' body motions frequently have to be adapted to changes in the environment or to interactions with other characters. Producing the corresponding motions with motion capture alone requires a performer to re-record the motion whenever the requirements change, and many desired motions are too dangerous or too difficult to capture directly, for example because of occlusion or device limitations. To overcome these problems, this subproject combines the advantages of motion-capture-based and physics-based motion synthesis. We plan to analyze captured motion sequences to form a motion vector space and to estimate the dynamics parameters of the limbs from the data. These parameters, "learned" from a small amount of motion capture data, can then be used, through interpolation in the vector space or through dynamics computation, to synthesize new, realistic body motions under different conditions.
For facial animation, current facial motion capture devices can only track markers or feature points on the face and cannot capture subtle details such as wrinkles or dimples. In this subproject, we will first improve the accuracy of our facial detail capture method with an approximate skin model. We will then analyze the correlations between feature points and facial details to build a feature-point-driven facial motion model with detailed expressions, compensating for this limitation of current performance-driven facial animation.
The character motion and facial expression synthesis techniques of this subproject, integrated with the character muscle and cloth deformation techniques and the realistic skin and cloth rendering techniques proposed in the other subprojects, will speed up the production of 3D character animation and greatly improve the realism of characters in computer animation and games.
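To make the motion-blending idea more concrete, the sketch below illustrates one common way of interpolating captured clips in a parameter (motion vector) space: each clip is a time-aligned matrix of joint angles, and a new motion for an unseen condition is obtained as a weighted combination of the examples. This is a minimal illustration under assumed data formats; the function names, the Gaussian weighting, and the placeholder data are not taken from the project itself.

```python
"""Minimal sketch of example-based motion blending in a parameter space.

Only an illustration of the general idea in the abstract (interpolating
captured motions to meet a new condition); the project's actual
parameterization and blending method are not specified here.
"""
import numpy as np

def blend_motions(examples, params, target_param, sigma=0.25):
    """Blend captured motion clips with Gaussian (RBF-like) weights.

    examples     : list of arrays, each (num_frames, num_dofs) of joint angles,
                   assumed to be time-aligned (e.g. by dynamic time warping)
    params       : (num_examples,) condition value of each clip (e.g. step height)
    target_param : condition for the new motion to synthesize
    """
    params = np.asarray(params, dtype=float)
    weights = np.exp(-((params - target_param) ** 2) / (2.0 * sigma ** 2))
    weights /= weights.sum()                          # normalize blend weights
    stacked = np.stack(examples)                      # (num_examples, num_frames, num_dofs)
    return np.tensordot(weights, stacked, axes=1)     # (num_frames, num_dofs)

# Hypothetical usage: three reaching motions captured at different target heights.
rng = np.random.default_rng(0)
clips = [rng.standard_normal((120, 54)) for _ in range(3)]   # placeholder mocap data
new_motion = blend_motions(clips, params=[0.8, 1.2, 1.6], target_param=1.0)
print(new_motion.shape)   # (120, 54)
```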
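The feature-point-driven detail model can be illustrated in a similarly minimal way: learn a mapping from sparse feature-point displacements to dense per-vertex detail displacements, then drive the details from the tracked feature points alone. A plain least-squares linear map and synthetic placeholder data are assumed here purely for illustration; the project's actual correlation model may differ.

```python
"""Minimal sketch of a feature-point-driven facial detail model.

Learns a linear correlation between sparse feature-point offsets and dense
detail offsets, then predicts details from feature points alone.  The
least-squares map and the data shapes are illustrative assumptions only.
"""
import numpy as np

def fit_detail_map(feature_disp, detail_disp):
    """Least-squares map M such that detail_disp ≈ feature_disp @ M.

    feature_disp : (num_samples, num_features*3) stacked feature-point offsets
    detail_disp  : (num_samples, num_vertices*3) corresponding dense detail offsets
    """
    M, *_ = np.linalg.lstsq(feature_disp, detail_disp, rcond=None)
    return M

def drive_details(M, feature_frame):
    """Predict dense detail displacements for one frame of feature points."""
    return feature_frame @ M

# Hypothetical usage with synthetic placeholder data.
rng = np.random.default_rng(1)
F = rng.standard_normal((200, 30))       # 200 training frames, 10 feature points (x, y, z)
D = F @ rng.standard_normal((30, 900))   # 300-vertex detail layer, synthetic ground truth
M = fit_detail_map(F, D)
print(drive_details(M, F[0]).shape)      # (900,)
```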
Official Document #: NSC95-2221-E009-164-MY3
URI: http://hdl.handle.net/11536/102223
https://www.grb.gov.tw/search/planDetail?id=1591241&docId=272889
Appears in Collections: Research Projects