Full metadata record
DC Field | Value | Language
dc.contributor.author | 陳彥勳 | en_US
dc.contributor.author | Yen-Hsun Chen | en_US
dc.contributor.author | 施仁忠 | en_US
dc.contributor.author | Zen-Chung Shih | en_US
dc.date.accessioned | 2014-12-12T02:22:58Z | -
dc.date.available | 2014-12-12T02:22:58Z | -
dc.date.issued | 1999 | en_US
dc.identifier.uri | http://140.113.39.130/cdrfb3/record/nctu/#NT880394041 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/65537 | -
dc.description.abstract | 本論文利用一個通用的臉部模型,以及一張臉部正面的照片,讓使用者選取適當位置與數量的特徵點。依據這些特徵點座標來自動調整臉部模型,貼上照片材質之後,便可以產生出與該人物相像的臉部模型。透過所定義的多條臉部肌肉,並且加以參數化,便可控制臉部各區塊的動作,以組合出各種表情。為了模擬出不同人物表情變化的特色,本論文利用動作擷取設備來取得真實臉部表情的資料,並將這些資料轉換成肌肉控制參數。如此便可以透過簡單的幾個參數來模擬出不同人物各種逼真的表情。 | zh_TW
dc.description.abstract | In this thesis, we use a generic face model and a single frontal photograph of a real human face to generate a realistic face model. The user manually selects several feature points on the face image, and the model is then deformed automatically according to those points. After the face texture is mapped, a specific photo-realistic face model is obtained. A muscle-based technique is used to control the facial animation: several muscles applied to the face model drive the face meshes. To personalize facial expressions from person to person, motion capture devices are used. After these motion data are mapped to muscle parameters, realistic facial expressions and animations can be generated by controlling a few muscle parameters. | en_US
dc.language.iso | en_US | en_US
dc.subject | 人臉模型建構 | zh_TW
dc.subject | 臉部表情 | zh_TW
dc.subject | 肌肉控制臉部動畫 | zh_TW
dc.subject | 動作擷取 | zh_TW
dc.subject | face modeling | en_US
dc.subject | facial expression | en_US
dc.subject | muscle-based facial animation | en_US
dc.subject | motion capture | en_US
dc.title | 人物化身臉部表情的模擬與動畫 | zh_TW
dc.title | Realistic Modeling and Animation of Avatar Facial Expression | en_US
dc.type | Thesis | en_US
dc.contributor.department | 資訊科學與工程研究所 | zh_TW
Appears in Collections: Thesis
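The abstract describes a muscle-based facial animation technique in which parameterized muscles defined on the face model deform the face meshes. The thesis's actual formulation is not given in this record; the sketch below illustrates the general idea with a simplified Waters-style linear muscle, where contraction pulls nearby vertices toward the muscle's fixed attachment point with angular and radial falloff. The function name, the falloff formulas, and the 0.5 scale factor are all assumptions for illustration.

```python
import numpy as np

def apply_linear_muscle(vertices, tail, head, contraction,
                        influence_radius, falloff=np.pi / 4):
    """Displace mesh vertices with a simplified linear facial muscle.

    tail: fixed (bone-attachment) end of the muscle.
    head: skin-insertion end, defining the muscle's direction.
    contraction in [0, 1]: how strongly affected vertices are pulled
    toward the tail. Vertices outside the influence radius or the
    angular zone of influence are left untouched.
    """
    tail = np.asarray(tail, dtype=float)
    head = np.asarray(head, dtype=float)
    axis_dir = (head - tail) / np.linalg.norm(head - tail)

    out = np.asarray(vertices, dtype=float).copy()
    for i, v in enumerate(out):
        d = v - tail
        r = np.linalg.norm(d)
        if r == 0.0 or r > influence_radius:
            continue  # outside the radial zone of influence
        angle = np.arccos(np.clip(np.dot(d / r, axis_dir), -1.0, 1.0))
        if angle > falloff:
            continue  # outside the angular zone of influence
        angular = np.cos(angle / falloff * np.pi / 2)     # 1 on axis, 0 at edge
        radial = np.cos(r / influence_radius * np.pi / 2)  # fades with distance
        out[i] = v - 0.5 * contraction * angular * radial * d
    return out
```

Combining several such muscles, each driven by a scalar contraction value (here, derived from motion-capture data), yields composite expressions from only a few parameters.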