Full metadata record
DC Field | Value | Language
dc.contributor.author | Lin, IC | en_US
dc.contributor.author | Ouhyoung, M | en_US
dc.date.accessioned | 2014-12-08T15:18:50Z | -
dc.date.available | 2014-12-08T15:18:50Z | -
dc.date.issued | 2005-07-01 | en_US
dc.identifier.issn | 0178-2789 | en_US
dc.identifier.uri | http://dx.doi.org/10.1007/s00371-005-0291-5 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/13540 | -
dc.description.abstract | In this paper, we present an automatic and efficient approach to capturing dense facial motion parameters, which extends our previous work on 3D reconstruction from mirror-reflected multiview video. To narrow the search space and rapidly generate lists of candidate 3D positions, we apply mirrored epipolar bands. For automatic tracking, we exploit the spatial proximity of facial surfaces and temporal coherence to find the best trajectories and to rectify missing and false tracking. More than 300 markers on a subject's face are tracked from video at a processing speed of 9.2 frames per second (fps) on a regular PC. The estimated 3D facial motion trajectories have been applied to our facial animation system and can also be used for facial motion analysis. | en_US
dc.language.iso | en_US | en_US
dc.subject | facial animation | en_US
dc.subject | motion capture | en_US
dc.subject | facial animation parameters | en_US
dc.subject | automatic tracking | en_US
dc.title | Mirror MoCap: Automatic and efficient capture of dense 3D facial motion parameters from video | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1007/s00371-005-0291-5 | en_US
dc.identifier.journal | VISUAL COMPUTER | en_US
dc.citation.volume | 21 | en_US
dc.citation.issue | 6 | en_US
dc.citation.spage | 355 | en_US
dc.citation.epage | 372 | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:000230991100001 | -
dc.citation.woscount | 10 | -
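
The abstract above mentions two key ideas: restricting the stereo correspondence search to mirrored epipolar bands, and using temporal coherence to maintain marker trajectories. The following is only a minimal illustrative sketch of these two ideas, not the authors' implementation; it assumes a known fundamental matrix F between the direct and mirrored views, and the band width, function names, and greedy nearest-neighbour association are hypothetical choices introduced here for illustration.

```python
import numpy as np

def epipolar_band_candidates(point, F, mirror_points, band_width=3.0):
    """Return indices of mirror-view 2D points lying within a band around
    the epipolar line of `point` (a 2D marker in the direct view).

    F is the 3x3 fundamental matrix from the direct view to the mirrored
    view; `band_width` is the half-width of the band in pixels (an
    illustrative threshold, not a value from the paper).
    """
    x = np.array([point[0], point[1], 1.0])
    a, b, c = F @ x                            # epipolar line coefficients
    norm = np.hypot(a, b)
    candidates = []
    for idx, (u, v) in enumerate(mirror_points):
        dist = abs(a * u + b * v + c) / norm   # point-to-line distance
        if dist <= band_width:
            candidates.append(idx)
    return candidates

def track_by_temporal_coherence(prev_positions, current_positions, max_jump=5.0):
    """Greedily assign each previously tracked 3D marker to its nearest
    candidate in the current frame; markers whose best match exceeds
    `max_jump` are flagged as missing (None) for later rectification.
    """
    assignments = {}
    for marker_id, prev in prev_positions.items():
        dists = np.linalg.norm(current_positions - prev, axis=1)
        best = int(np.argmin(dists))
        assignments[marker_id] = best if dists[best] <= max_jump else None
    return assignments
```

Note that the greedy nearest-neighbour step above is a simplification: the paper describes finding the best trajectories using both spatial proximity of the facial surface and temporal coherence, and rectifying missing and false tracks, which a per-marker greedy match does not fully capture.
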
Appears in Collections: Articles


Files in This Item:

  1. 000230991100001.pdf
