Conference Paper: 3D Human Mesh Regression with Dense Correspondence

Title: 3D Human Mesh Regression with Dense Correspondence
Authors: Zeng, W; Ouyang, W; Luo, P; Liu, W; Wang, X
Keywords: Three-dimensional displays
Image reconstruction
Solid modeling
Biological system modeling
Surface reconstruction
Issue Date: 2020
Publisher: IEEE Computer Society. The proceedings' web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000147
Citation: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020), Seattle, USA, 13-19 June 2020, p. 7052-7061
Abstract: Estimating a 3D mesh of the human body from a single 2D image is an important task with many applications, such as augmented reality and human-robot interaction. However, prior works reconstructed the 3D mesh from a global image feature extracted by a convolutional neural network (CNN), in which the dense correspondences between the mesh surface and the image pixels are missing, leading to suboptimal solutions. This paper proposes a model-free 3D human mesh estimation framework, named DecoMR, which explicitly establishes dense correspondence between the mesh and the local image features in UV space (i.e., a 2D space used for texture mapping of 3D meshes). DecoMR first predicts a pixel-to-surface dense correspondence map (i.e., an IUV image), with which we transfer local features from the image space to the UV space. The transferred local image features are then processed in the UV space to regress a location map, which is well aligned with the transferred features. Finally, we reconstruct the 3D human mesh from the regressed location map with a predefined mapping function. We also observe that the existing discontinuous UV map is unfriendly to network learning. Therefore, we propose a novel UV map that maintains most of the neighboring relations on the original mesh surface. Experiments demonstrate that our proposed local feature alignment and continuous UV map outperform existing 3D-mesh-based methods on multiple public benchmarks. Code will be made available at https://github.com/zengwang430521/DecoMR.
Description: Session: Poster 2.2 — Face, Gesture, and Body Pose; Motion and Tracking; Representation Learning - Poster no. 94; Paper ID 6333
CVPR 2020 held virtually due to COVID-19
Persistent Identifier: http://hdl.handle.net/10722/284160
ISSN: 1063-6919
2020 SCImago Journal Rankings: 4.658
ISI Accession Number ID: WOS:000620679507033
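
The pipeline summarized in the abstract above lends itself to a short sketch. The code below is a minimal illustration under assumed conventions, not the authors' implementation (see the linked GitHub repository for that): the function names transfer_to_uv and mesh_from_location_map, the tensor shapes, the nearest-neighbour scatter, and the per-vertex UV lookup table vert_uv are all assumptions made for exposition. It shows the two core operations the abstract describes: transferring local image features into UV space with a predicted IUV image, and reading 3D vertex positions back out of a regressed location map.

import torch
import torch.nn.functional as F

def transfer_to_uv(feat, iuv, uv_size=128):
    # Scatter per-pixel image features into UV space (illustrative only).
    #   feat: (B, C, H, W) local image features from a CNN backbone.
    #   iuv:  (B, 3, H, W) predicted IUV image; channel 0 is read here as a
    #         foreground mask, channels 1-2 as (u, v) coordinates in [0, 1].
    # Returns a (B, C, uv_size, uv_size) UV-space feature map.
    B, C, H, W = feat.shape
    mask = iuv[:, 0] > 0.5
    u = (iuv[:, 1] * (uv_size - 1)).long().clamp(0, uv_size - 1)
    v = (iuv[:, 2] * (uv_size - 1)).long().clamp(0, uv_size - 1)
    uv_feat = feat.new_zeros(B, C, uv_size * uv_size)
    count = feat.new_zeros(B, 1, uv_size * uv_size)
    for b in range(B):  # nearest-neighbour scatter; colliding pixels are averaged
        fg = mask[b]
        idx = v[b][fg] * uv_size + u[b][fg]            # flat UV texel indices
        uv_feat[b].index_add_(1, idx, feat[b][:, fg])
        count[b].index_add_(1, idx, feat.new_ones(1, idx.numel()))
    uv_feat = uv_feat / count.clamp(min=1)
    return uv_feat.view(B, C, uv_size, uv_size)

def mesh_from_location_map(loc_map, vert_uv):
    # Read 3D vertex positions out of a regressed location map.
    #   loc_map: (B, 3, S, S) location map; each texel stores the 3D position
    #            of the body-surface point mapped to that texel.
    #   vert_uv: (V, 2) fixed per-vertex UV coordinates of the template mesh,
    #            in [-1, 1]; this plays the role of the abstract's
    #            "predefined mapping function" and is assumed given.
    # Returns (B, V, 3) mesh vertices, sampled bilinearly from the map.
    B = loc_map.shape[0]
    grid = vert_uv.view(1, 1, -1, 2).expand(B, -1, -1, -1)    # (B, 1, V, 2)
    verts = F.grid_sample(loc_map, grid, align_corners=True)  # (B, 3, 1, V)
    return verts.squeeze(2).permute(0, 2, 1)

In DecoMR itself the IUV predictor, the UV-space regression network, and the continuous UV map are learned or purpose-designed components; the scatter above merely stands in for the paper's feature-transfer step, and the bilinear lookup stands in for the predefined UV-to-mesh mapping.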

DC Field: Value
dc.contributor.author: Zeng, W
dc.contributor.author: Ouyang, W
dc.contributor.author: Luo, P
dc.contributor.author: Liu, W
dc.contributor.author: Wang, X
dc.date.accessioned: 2020-07-20T05:56:33Z
dc.date.available: 2020-07-20T05:56:33Z
dc.date.issued: 2020
dc.identifier.citation: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020), Seattle, USA, 13-19 June 2020, p. 7052-7061
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/10722/284160
dc.description: Session: Poster 2.2 — Face, Gesture, and Body Pose; Motion and Tracking; Representation Learning - Poster no. 94; Paper ID 6333
dc.description: CVPR 2020 held virtually due to COVID-19
dc.description.abstract: Estimating a 3D mesh of the human body from a single 2D image is an important task with many applications, such as augmented reality and human-robot interaction. However, prior works reconstructed the 3D mesh from a global image feature extracted by a convolutional neural network (CNN), in which the dense correspondences between the mesh surface and the image pixels are missing, leading to suboptimal solutions. This paper proposes a model-free 3D human mesh estimation framework, named DecoMR, which explicitly establishes dense correspondence between the mesh and the local image features in UV space (i.e., a 2D space used for texture mapping of 3D meshes). DecoMR first predicts a pixel-to-surface dense correspondence map (i.e., an IUV image), with which we transfer local features from the image space to the UV space. The transferred local image features are then processed in the UV space to regress a location map, which is well aligned with the transferred features. Finally, we reconstruct the 3D human mesh from the regressed location map with a predefined mapping function. We also observe that the existing discontinuous UV map is unfriendly to network learning. Therefore, we propose a novel UV map that maintains most of the neighboring relations on the original mesh surface. Experiments demonstrate that our proposed local feature alignment and continuous UV map outperform existing 3D-mesh-based methods on multiple public benchmarks. Code will be made available at https://github.com/zengwang430521/DecoMR.
dc.language: eng
dc.publisher: IEEE Computer Society. The proceedings' web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000147
dc.relation.ispartof: IEEE Conference on Computer Vision and Pattern Recognition. Proceedings
dc.rights: IEEE Conference on Computer Vision and Pattern Recognition. Proceedings. Copyright © IEEE Computer Society.
dc.rights: ©2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Three-dimensional displays
dc.subject: Image reconstruction
dc.subject: Solid modeling
dc.subject: Biological system modeling
dc.subject: Surface reconstruction
dc.title: 3D Human Mesh Regression with Dense Correspondence
dc.type: Conference_Paper
dc.identifier.email: Luo, P: pluo@hku.hk
dc.identifier.authority: Luo, P=rp02575
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/CVPR42600.2020.00708
dc.identifier.scopus: eid_2-s2.0-85094681073
dc.identifier.hkuros: 311020
dc.identifier.spage: 7052
dc.identifier.epage: 7061
dc.identifier.isi: WOS:000620679507033
dc.publisher.place: United States
dc.identifier.issnl: 1063-6919
