Conference Paper: End-to-end optimization of scene layout

Title: End-to-end optimization of scene layout
Authors: Luo, Andrew; Zhang, Zhoutong; Wu, Jiajun; Tenenbaum, Joshua B.
Issue Date: 2020
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2020, p. 3753-3762
Abstract: We propose an end-to-end variational generative model for scene layout synthesis conditioned on scene graphs. Unlike unconditional scene layout generation, we use scene graphs as an abstract but general representation to guide the synthesis of diverse scene layouts that satisfy relationships included in the scene graph. This gives rise to more flexible control over the synthesis process, allowing various forms of inputs such as scene layouts extracted from sentences or inferred from a single color image. Using our conditional layout synthesizer, we can generate various layouts that share the same structure of the input example. In addition to this conditional generation design, we also integrate a differentiable rendering module that enables layout refinement using only 2D projections of the scene. Given a depth and a semantics map, the differentiable rendering module enables optimizing over the synthesized layout to fit the given input in an analysis-by-synthesis fashion. Experiments suggest that our model achieves higher accuracy and diversity in conditional scene synthesis and allows exemplar-based scene generation from various input forms.
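
The analysis-by-synthesis refinement described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the renderer soft_render, the box parameterization, and the target map below are all illustrative assumptions, using a toy PyTorch loop in which a layout of axis-aligned boxes is differentiably rasterized into a 2D map and optimized by gradient descent to fit an observed projection.

    # Toy sketch (not the paper's code): refine layout parameters so that a
    # differentiable 2D rendering of the layout matches a target projection.
    import torch

    def soft_render(boxes, size=32, sharpness=20.0):
        """Differentiably rasterize axis-aligned boxes (cx, cy, w, h in [0, 1])
        into a soft occupancy map of shape (size, size)."""
        ys, xs = torch.meshgrid(
            torch.linspace(0.0, 1.0, size),
            torch.linspace(0.0, 1.0, size),
            indexing="ij",
        )
        occupancy = torch.zeros(size, size)
        for cx, cy, w, h in boxes:
            # Sigmoid-based soft membership keeps gradients w.r.t. box parameters.
            inside_x = torch.sigmoid(sharpness * (w / 2 - (xs - cx).abs()))
            inside_y = torch.sigmoid(sharpness * (h / 2 - (ys - cy).abs()))
            occupancy = occupancy + inside_x * inside_y
        return occupancy.clamp(max=1.0)

    # Target map, standing in for an observed semantics/depth projection.
    target_boxes = torch.tensor([[0.3, 0.3, 0.2, 0.2], [0.7, 0.6, 0.3, 0.2]])
    target = soft_render(target_boxes).detach()

    # Start from a perturbed layout (stand-in for a synthesized initial layout)
    # and optimize it to fit the observed 2D projection.
    boxes = (target_boxes + 0.1 * torch.randn_like(target_boxes)).requires_grad_()
    optimizer = torch.optim.Adam([boxes], lr=0.01)
    for step in range(200):
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(soft_render(boxes), target)
        loss.backward()
        optimizer.step()
    print("final fitting loss:", loss.item())

The key design point mirrored here is that the rendering step stays differentiable, so the 2D fitting loss can be back-propagated directly to the layout parameters; the paper applies this idea with depth and semantics maps rather than the toy occupancy map above.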
Persistent Identifier: http://hdl.handle.net/10722/352215
ISSN: 1063-6919
2023 SCImago Journal Rankings: 10.331

DC Field | Value | Language
dc.contributor.author | Luo, Andrew | -
dc.contributor.author | Zhang, Zhoutong | -
dc.contributor.author | Wu, Jiajun | -
dc.contributor.author | Tenenbaum, Joshua B. | -
dc.date.accessioned | 2024-12-16T03:57:22Z | -
dc.date.available | 2024-12-16T03:57:22Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2020, p. 3753-3762 | -
dc.identifier.issn | 1063-6919 | -
dc.identifier.uri | http://hdl.handle.net/10722/352215 | -
dc.description.abstract | We propose an end-to-end variational generative model for scene layout synthesis conditioned on scene graphs. Unlike unconditional scene layout generation, we use scene graphs as an abstract but general representation to guide the synthesis of diverse scene layouts that satisfy relationships included in the scene graph. This gives rise to more flexible control over the synthesis process, allowing various forms of inputs such as scene layouts extracted from sentences or inferred from a single color image. Using our conditional layout synthesizer, we can generate various layouts that share the same structure of the input example. In addition to this conditional generation design, we also integrate a differentiable rendering module that enables layout refinement using only 2D projections of the scene. Given a depth and a semantics map, the differentiable rendering module enables optimizing over the synthesized layout to fit the given input in an analysis-by-synthesis fashion. Experiments suggest that our model achieves higher accuracy and diversity in conditional scene synthesis and allows exemplar-based scene generation from various input forms. | -
dc.language | eng | -
dc.relation.ispartof | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | -
dc.title | End-to-end optimization of scene layout | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/CVPR42600.2020.00381 | -
dc.identifier.scopus | eid_2-s2.0-85094569564 | -
dc.identifier.spage | 3753 | -
dc.identifier.epage | 3762 | -
