Full metadata record

DC Field | Value | Language
dc.contributor.author | Chuang, Chii-Yuan | en_US
dc.contributor.author | Lin, I-Chen | en_US
dc.contributor.author | Lo, Yung-Sheng | en_US
dc.contributor.author | Lin, Chao-Chih | en_US
dc.date.accessioned | 2014-12-08T15:11:49Z | -
dc.date.available | 2014-12-08T15:11:49Z | -
dc.date.issued | 2007 | en_US
dc.identifier.isbn | 978-972-8865-72-6 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/9068 | -
dc.description.abstract | Producing a life-like 3D facial expression is usually a labor-intensive process. In the movie and game industries, motion capture and 3D scanning techniques, which acquire motion data from real persons, are used to speed up production. However, acquiring dynamic and subtle details on a face, such as wrinkles, is still difficult or expensive. In this paper, we propose a feature-point-driven approach to synthesize novel expressions with details. Our work can be divided into two main parts: acquisition of 3D facial details and expression synthesis. 3D facial details are estimated from sample images by a shape-from-shading technique. By employing the relation between specific feature points and facial surfaces in prototype images, our system provides an intuitive editing tool to synthesize the 3D geometry and corresponding 2D textures or 3D detailed normals of novel expressions. Besides expression editing, the proposed method can also be extended to enhance existing motion capture data with facial details. | en_US
dc.language.iso | en_US | en_US
dc.subject | facial expression | en_US
dc.subject | facial animation | en_US
dc.subject | graphical interfaces | en_US
dc.subject | surface reconstruction | en_US
dc.title | Feature-point driven 3D expression editing | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | GRAPP 2007: PROCEEDINGS OF THE SECOND INTERNATIONAL CONFERENCE ON COMPUTER GRAPHICS THEORY AND APPLICATIONS, VOL AS/IE | en_US
dc.citation.spage | 165 | en_US
dc.citation.epage | 170 | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:000252426600021 | -
Appears in Collections: Conference Papers