Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Huang, Cheng-Guo | en_US |
dc.contributor.author | Huang, Tsung-Shian | en_US |
dc.contributor.author | Lin, Wen-Chieh | en_US |
dc.contributor.author | Chuang, Jung-Hong | en_US |
dc.date.accessioned | 2014-12-08T15:30:40Z | - |
dc.date.available | 2014-12-08T15:30:40Z | - |
dc.date.issued | 2013-05-01 | en_US |
dc.identifier.issn | 1546-4261 | en_US |
dc.identifier.uri | http://dx.doi.org/10.1002/cav.1523 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/21897 | - |
dc.description.abstract | Simulating realistic makeup effects is one of the important research issues in the 3D facial animation and cosmetic industry. Existing approaches based on image processing techniques, such as warping and blending, have mostly been applied to transfer one person's makeup to another's. Although these approaches are intuitive and need only makeup images, they have some drawbacks, for example, distorted shapes and fixed viewing and lighting conditions. In this paper, we propose an integrated approach, which combines the Kubelka-Munk model and a screen-space skin rendering approach, to simulate 3D makeup effects. The Kubelka-Munk model is used to compute total transmittance when light passes through cosmetic layers, whereas the screen-space translucent rendering approach simulates the subsurface scattering effects inside human skin. The parameters of the Kubelka-Munk model are obtained by measuring the optical properties of different cosmetic materials, such as foundations, blushes, and lipsticks. Our results demonstrate that the proposed approach is able to render realistic cosmetic effects on human facial models, and different cosmetic materials and styles can be flexibly applied and simulated in real time. Copyright (c) 2013 John Wiley & Sons, Ltd. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | skin rendering | en_US |
dc.subject | translucent rendering | en_US |
dc.subject | cosmetic rendering | en_US |
dc.title | Physically based cosmetic rendering | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1002/cav.1523 | en_US |
dc.identifier.journal | COMPUTER ANIMATION AND VIRTUAL WORLDS | en_US |
dc.citation.volume | 24 | en_US |
dc.citation.issue | 3-4 | en_US |
dc.citation.spage | 275 | en_US |
dc.citation.epage | 283 | en_US |
dc.contributor.department | 資訊工程學系 | zh_TW |
dc.contributor.department | Department of Computer Science | en_US |
dc.identifier.wosnumber | WOS:000319003500015 | - |
dc.citation.woscount | 0 | - |
Appears in Collections: | Articles |
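The abstract describes using the Kubelka-Munk two-flux model to compute the total transmittance of light passing through cosmetic layers. As a rough illustration of that idea (not the authors' implementation, and with made-up coefficients rather than the paper's measured optical properties), the standard Kubelka-Munk solution for a single layer, plus the usual series formula for stacking two layers, can be sketched as:

```python
import math

def kubelka_munk(K, S, d):
    """Reflectance R and transmittance T of one layer of thickness d
    under the classic Kubelka-Munk two-flux model.

    K: absorption coefficient, S: scattering coefficient (same units, per
    unit length). Returns (R, T), each in (0, 1), with R + T < 1 when K > 0.
    """
    a = (K + S) / S                      # a = 1 + K/S
    b = math.sqrt(a * a - 1.0)
    sinh_bSd = math.sinh(b * S * d)
    cosh_bSd = math.cosh(b * S * d)
    denom = a * sinh_bSd + b * cosh_bSd
    return sinh_bSd / denom, b / denom   # (R, T)

def stack_layers(R1, T1, R2, T2):
    """Composite layer 1 on top of layer 2, summing the geometric series
    of inter-reflections between the two layers."""
    inter = 1.0 - R1 * R2
    R12 = R1 + (T1 * T1 * R2) / inter
    T12 = (T1 * T2) / inter
    return R12, T12

# Illustrative (not measured) coefficients for a thin cosmetic layer.
R, T = kubelka_munk(K=0.5, S=2.0, d=0.1)
R2, T2 = stack_layers(R, T, R, T)   # two identical layers: less light gets through
```

Per-wavelength (or per-RGB-channel) K and S values, which the paper measures for foundations, blushes, and lipsticks, would be fed through this kind of computation to obtain the tint applied before the screen-space subsurface-scattering pass.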