Links for fulltext (May Require Subscription):
- Publisher Website: 10.1109/TVCG.2018.2886007
- Scopus: eid_2-s2.0-85058889509
- PMID: 30575537
- WOS: WOS:000542933100002
Article: CaricatureShop: Personalized and Photorealistic Caricature Sketching
Title | CaricatureShop: Personalized and Photorealistic Caricature Sketching |
---|---|
Authors | Han, X; Hou, K; Du, D; Qiu, Y; Cui, S; Zhou, K; Yu, Y |
Keywords | Face; Three-dimensional displays; Solid modeling; Two dimensional displays; Strain |
Issue Date | 2020 |
Publisher | IEEE. The Journal's web site is located at http://www.computer.org/tvcg |
Citation | IEEE Transactions on Visualization and Computer Graphics, 2020, v. 26 n. 7, p. 2349-2361 |
Abstract | In this paper, we propose the first sketching system for interactive, personalized, and photorealistic face caricaturing. Given an image of a human face, users can create caricature photos by manipulating its facial feature curves. Our system first exaggerates the recovered 3D face model by assigning a scaling factor to the Laplacian of each vertex according to the edited sketches. The mapping between 2D sketches and the vertex-wise scaling field is constructed by a novel deep learning architecture. Applying the same sketch to different input faces yields different exaggerations that reflect their distinct geometric characteristics, which makes the generated results “personalized”. From the obtained 3D caricature model, two images are generated: one by applying 2D warping guided by the underlying 3D mesh deformation, and the other by re-rendering the deformed 3D textured model. These two images are then seamlessly blended to produce the final output. Because the meshes are severely stretched, the rendered texture appears blurry, so a deep learning approach is exploited to infer the missing details and enhance these blurry regions. Moreover, a relighting operation is introduced to further improve the photorealism of the result. These steps make our results “photorealistic”. Qualitative experimental results validate the effectiveness of our sketching system. |
Persistent Identifier | http://hdl.handle.net/10722/284306 |
ISSN | 1077-2626 (2023 Impact Factor: 4.7; 2023 SCImago Journal Rankings: 2.056) |
ISI Accession Number ID | WOS:000542933100002 |
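The abstract above describes exaggeration as assigning a scaling factor to the Laplacian of each vertex according to the edited sketches. Since this record carries no code, the following is only a minimal, hypothetical sketch of such a Laplacian-scaling deformation in Python, assuming a uniform graph Laplacian, a soft positional anchor, and a least-squares solve; the authors' actual discretization, weights, and solver are not specified here and may differ.

```python
# Hypothetical sketch of Laplacian-scaling mesh exaggeration.
# Assumptions (not from the paper): uniform graph Laplacian, soft anchor term,
# least-squares reconstruction via scipy's lsqr.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def exaggerate(vertices, faces, scale, anchor_weight=0.1):
    """vertices: (n, 3) float array; faces: (m, 3) int array;
    scale: (n,) per-vertex exaggeration field (stand-in for the field the
    paper predicts from the edited 2D sketches with a deep network)."""
    n = len(vertices)

    # Build a uniform graph Laplacian L = I - D^{-1} A from the triangle list.
    rows, cols = [], []
    for f in faces:
        for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            rows += [a, b]
            cols += [b, a]
    A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n)).tocsr()
    A.data[:] = 1.0                           # collapse duplicated edges
    deg = np.maximum(np.asarray(A.sum(axis=1)).ravel(), 1.0)
    L = sp.eye(n) - sp.diags(1.0 / deg) @ A

    # Scale each vertex's differential (Laplacian) coordinate.
    delta = L @ vertices                      # (n, 3) Laplacian coordinates
    target = scale[:, None] * delta

    # Least-squares reconstruction with a weak positional anchor so the
    # exaggerated mesh stays registered to the original.
    A_ls = sp.vstack([L, anchor_weight * sp.eye(n)]).tocsr()
    b_ls = np.vstack([target, anchor_weight * vertices])
    return np.column_stack(
        [spla.lsqr(A_ls, b_ls[:, k])[0] for k in range(3)])
```

In this sketch, `scale` is simply passed in; in the paper it is the vertex-wise scaling field produced by the deep learning architecture from the user's sketch edits.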
DC Field | Value | Language |
---|---|---|
dc.contributor.author | HAN, X | - |
dc.contributor.author | HOU, K | - |
dc.contributor.author | DU, D | - |
dc.contributor.author | QIU, Y | - |
dc.contributor.author | CUI, S | - |
dc.contributor.author | ZHOU, K | - |
dc.contributor.author | Yu, Y | - |
dc.date.accessioned | 2020-07-20T05:57:40Z | - |
dc.date.available | 2020-07-20T05:57:40Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | IEEE Transactions on Visualization and Computer Graphics, 2020, v. 26 n. 7, p. 2349-2361 | - |
dc.identifier.issn | 1077-2626 | - |
dc.identifier.uri | http://hdl.handle.net/10722/284306 | - |
dc.description.abstract | In this paper, we propose the first sketching system for interactive, personalized, and photorealistic face caricaturing. Given an image of a human face, users can create caricature photos by manipulating its facial feature curves. Our system first exaggerates the recovered 3D face model by assigning a scaling factor to the Laplacian of each vertex according to the edited sketches. The mapping between 2D sketches and the vertex-wise scaling field is constructed by a novel deep learning architecture. Applying the same sketch to different input faces yields different exaggerations that reflect their distinct geometric characteristics, which makes the generated results “personalized”. From the obtained 3D caricature model, two images are generated: one by applying 2D warping guided by the underlying 3D mesh deformation, and the other by re-rendering the deformed 3D textured model. These two images are then seamlessly blended to produce the final output. Because the meshes are severely stretched, the rendered texture appears blurry, so a deep learning approach is exploited to infer the missing details and enhance these blurry regions. Moreover, a relighting operation is introduced to further improve the photorealism of the result. These steps make our results “photorealistic”. Qualitative experimental results validate the effectiveness of our sketching system. | -
dc.language | eng | - |
dc.publisher | IEEE. The Journal's web site is located at http://www.computer.org/tvcg | - |
dc.relation.ispartof | IEEE Transactions on Visualization and Computer Graphics | - |
dc.rights | IEEE Transactions on Visualization and Computer Graphics. Copyright © IEEE. | - |
dc.rights | ©20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | - |
dc.subject | Face | - |
dc.subject | Three-dimensional displays | - |
dc.subject | Strain | -
dc.subject | Solid modeling | - |
dc.subject | Two dimensional displays | - |
dc.title | CaricatureShop: Personalized and Photorealistic Caricature Sketching | - |
dc.type | Article | - |
dc.identifier.email | Yu, Y: yzyu@cs.hku.hk | - |
dc.identifier.authority | Yu, Y=rp01415 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TVCG.2018.2886007 | - |
dc.identifier.pmid | 30575537 | - |
dc.identifier.scopus | eid_2-s2.0-85058889509 | - |
dc.identifier.hkuros | 310936 | - |
dc.identifier.volume | 26 | - |
dc.identifier.issue | 7 | - |
dc.identifier.spage | 2349 | - |
dc.identifier.epage | 2361 | - |
dc.identifier.isi | WOS:000542933100002 | - |
dc.publisher.place | United States | - |
dc.identifier.issnl | 1077-2626 | - |
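For illustration only, the sketch below mirrors the abstract's "2D warping guided by the underlying 3D mesh deformation": the original and exaggerated vertices are projected with an assumed weak-perspective camera, and the photo is warped so that the projected points move to their deformed positions. The weak-perspective camera and scikit-image's piecewise-affine warp are assumptions of this sketch, not the authors' implementation, which additionally re-renders the deformed textured model and blends the two images.

```python
# Hypothetical sketch of image warping driven by a 3D mesh deformation.
# Assumptions (not from the paper): weak-perspective camera, scikit-image
# piecewise-affine warp, no blending with a re-rendered textured model.
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def project(vertices, scale=1.0, tx=0.0, ty=0.0):
    """Weak-perspective projection of (n, 3) vertices to (n, 2) image points
    in (x, y) = (column, row) order, as expected by scikit-image transforms."""
    return scale * vertices[:, :2] + np.array([tx, ty])

def warp_photo(image, verts_before, verts_after, cam=(1.0, 0.0, 0.0)):
    src2d = project(verts_before, *cam)   # where the features are in the photo
    dst2d = project(verts_after, *cam)    # where they should move to

    # warp() treats the transform as a map from output coordinates back to
    # input coordinates, so we estimate the mapping dst -> src.
    tform = PiecewiseAffineTransform()
    tform.estimate(dst2d, src2d)

    # Pixels outside the projected face mesh are filled with warp()'s constant
    # value here; the real system blends the warped face back into the photo.
    return warp(image, tform)
```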