Conference Paper: OrthoPlanes: A Novel Representation for Better 3D-Awareness of GANs

Title: OrthoPlanes: A Novel Representation for Better 3D-Awareness of GANs
Authors: He, Honglin; Yang, Zhuoqian; Li, Shikai; Dai, Bo; Wu, Wayne
Issue Date: 2023
Citation: Proceedings of the IEEE International Conference on Computer Vision, 2023, p. 22939-22950
Abstract: We present a new method for generating realistic and view-consistent images with fine geometry from 2D image collections. Our method proposes a hybrid explicit-implicit representation called OrthoPlanes, which encodes fine-grained 3D information in feature maps that can be efficiently generated by modifying 2D StyleGANs. Compared to previous representations, our method has better scalability and expressiveness with clear and explicit information. As a result, our method can handle more challenging view-angles and synthesize articulated objects with high spatial degree of freedom. Experiments demonstrate that our method achieves state-of-the-art results on FFHQ and SHHQ datasets, both quantitatively and qualitatively. Project page: https://orthoplanes.github.io/.
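The abstract describes querying a hybrid explicit-implicit representation built from 2D feature planes. The paper itself defines how OrthoPlanes are generated and aggregated; purely as a rough illustration of how a point can be queried against stacks of axis-aligned feature planes, the sketch below projects a 3D point onto each plane stack and bilinearly samples the nearest plane. The function names, the nearest-plane selection, and the sum aggregation are all hypothetical stand-ins, not taken from the publication.

```python
import numpy as np

def bilinear_sample(feat, uv):
    """Bilinearly sample a (H, W, C) feature map at continuous
    coordinates uv in [0, 1]^2."""
    H, W, _ = feat.shape
    x, y = uv[0] * (W - 1), uv[1] * (H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * feat[y0, x0]
            + wx * (1 - wy) * feat[y0, x1]
            + (1 - wx) * wy * feat[y1, x0]
            + wx * wy * feat[y1, x1])

def sample_orthoplane_features(planes, point):
    """Illustrative (hypothetical) query of axis-aligned plane stacks.

    `planes` maps an axis index (0, 1, 2) to an array of shape
    (K, H, W, C): K feature planes orthogonal to that axis, with
    offsets assumed uniform in [0, 1]. The point is projected onto
    each stack by dropping that axis's coordinate; the nearest plane
    along the axis is sampled and the per-axis features are summed
    (a simple stand-in, not the paper's aggregation)."""
    feats = []
    for axis, stack in planes.items():
        K = stack.shape[0]
        # pick the plane whose offset is closest to the point's coordinate
        k = min(int(round(point[axis] * (K - 1))), K - 1)
        uv = np.delete(point, axis)  # project: drop the plane's own axis
        feats.append(bilinear_sample(stack[k], uv))
    return np.sum(feats, axis=0)
```

In a NeRF-style pipeline, a feature vector sampled this way would typically be fed through a small MLP decoder to produce density and color; that stage is omitted here.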
Persistent Identifier: http://hdl.handle.net/10722/352499
ISSN: 1550-5499
2023 SCImago Journal Rankings: 12.263


DC Field | Value | Language
dc.contributor.author | He, Honglin | -
dc.contributor.author | Yang, Zhuoqian | -
dc.contributor.author | Li, Shikai | -
dc.contributor.author | Dai, Bo | -
dc.contributor.author | Wu, Wayne | -
dc.date.accessioned | 2024-12-16T03:59:28Z | -
dc.date.available | 2024-12-16T03:59:28Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | Proceedings of the IEEE International Conference on Computer Vision, 2023, p. 22939-22950 | -
dc.identifier.issn | 1550-5499 | -
dc.identifier.uri | http://hdl.handle.net/10722/352499 | -
dc.description.abstract | We present a new method for generating realistic and view-consistent images with fine geometry from 2D image collections. Our method proposes a hybrid explicit-implicit representation called OrthoPlanes, which encodes fine-grained 3D information in feature maps that can be efficiently generated by modifying 2D StyleGANs. Compared to previous representations, our method has better scalability and expressiveness with clear and explicit information. As a result, our method can handle more challenging view-angles and synthesize articulated objects with high spatial degree of freedom. Experiments demonstrate that our method achieves state-of-the-art results on FFHQ and SHHQ datasets, both quantitatively and qualitatively. Project page: https://orthoplanes.github.io/. | -
dc.language | eng | -
dc.relation.ispartof | Proceedings of the IEEE International Conference on Computer Vision | -
dc.title | OrthoPlanes: A Novel Representation for Better 3D-Awareness of GANs | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/ICCV51070.2023.02102 | -
dc.identifier.scopus | eid_2-s2.0-85179386501 | -
dc.identifier.spage | 22939 | -
dc.identifier.epage | 22950 | -
