Conference Paper: Learning to Reconstruct 3D Manhattan Wireframes from a Single Image

Title: Learning to Reconstruct 3D Manhattan Wireframes from a Single Image
Authors: Zhou, Yichao; Qi, Haozhi; Zhai, Yuexiang; Sun, Qi; Chen, Zhili; Wei, Li Yi; Ma, Yi
Issue Date: 2019
Citation: Proceedings of the IEEE International Conference on Computer Vision, 2019, v. 2019-October, p. 7697-7706
Abstract: From a single view of an urban environment, we propose a method to effectively exploit the global structural regularities for obtaining a compact, accurate, and intuitive 3D wireframe representation. Our method trains a single convolutional neural network to simultaneously detect salient junctions and straight lines, as well as predict their 3D depth and vanishing points. Compared with state-of-the-art learning-based wireframe detection methods, our network is much simpler and more unified, leading to better 2D wireframe detection. With a global structural prior (such as Manhattan assumption), our method further reconstructs a full 3D wireframe model, a compact vector representation suitable for a variety of high-level vision tasks such as AR and CAD. We conduct extensive evaluations of our method on a large new synthetic dataset of urban scenes as well as real images. Our code and datasets will be published along with the paper.
Persistent Identifier: http://hdl.handle.net/10722/327757
ISSN: 1550-5499
2023 SCImago Journal Rankings: 12.263
ISI Accession Number ID: WOS:000548549202079
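The "global structural prior (such as Manhattan assumption)" in the abstract constrains scene lines to three mutually orthogonal directions, which are tied to the image's vanishing points. A minimal sketch of the underlying geometry (not the paper's network; camera intrinsics here are hypothetical illustrative values): a vanishing point v back-projects to the 3D line direction d ∝ K⁻¹v, and under the Manhattan assumption the three recovered directions are mutually orthogonal.

```python
import numpy as np

# Hypothetical camera intrinsics (focal length and principal point are
# illustrative values, not taken from the paper).
f, cx, cy = 800.0, 320.0, 240.0
K = np.array([[f, 0.0, cx],
              [0.0, f, cy],
              [0.0, 0.0, 1.0]])

def vp_to_direction(vp, K):
    """Back-project a vanishing point (homogeneous pixel coordinates)
    to a unit 3D line direction: d is proportional to K^{-1} v."""
    d = np.linalg.solve(K, vp)
    return d / np.linalg.norm(d)

# Synthetic Manhattan scene: the vanishing points of the three canonical
# world axes are the images of the points at infinity along x, y, z.
vps = [K @ axis for axis in np.eye(3)]
dirs = np.stack([vp_to_direction(v, K) for v in vps])

# Under the Manhattan assumption the recovered directions are mutually
# orthogonal, so their Gram matrix is the identity.
print(np.round(dirs @ dirs.T, 6))
```

This is only the classical single-view geometry the prior rests on; the paper's contribution is learning the junctions, lines, depths, and vanishing points from which such a 3D wireframe is assembled.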
DC Field | Value
dc.contributor.author | Zhou, Yichao
dc.contributor.author | Qi, Haozhi
dc.contributor.author | Zhai, Yuexiang
dc.contributor.author | Sun, Qi
dc.contributor.author | Chen, Zhili
dc.contributor.author | Wei, Li Yi
dc.contributor.author | Ma, Yi
dc.date.accessioned | 2023-05-08T02:26:36Z
dc.date.available | 2023-05-08T02:26:36Z
dc.date.issued | 2019
dc.identifier.citation | Proceedings of the IEEE International Conference on Computer Vision, 2019, v. 2019-October, p. 7697-7706
dc.identifier.issn | 1550-5499
dc.identifier.uri | http://hdl.handle.net/10722/327757
dc.description.abstract | From a single view of an urban environment, we propose a method to effectively exploit the global structural regularities for obtaining a compact, accurate, and intuitive 3D wireframe representation. Our method trains a single convolutional neural network to simultaneously detect salient junctions and straight lines, as well as predict their 3D depth and vanishing points. Compared with state-of-the-art learning-based wireframe detection methods, our network is much simpler and more unified, leading to better 2D wireframe detection. With a global structural prior (such as Manhattan assumption), our method further reconstructs a full 3D wireframe model, a compact vector representation suitable for a variety of high-level vision tasks such as AR and CAD. We conduct extensive evaluations of our method on a large new synthetic dataset of urban scenes as well as real images. Our code and datasets will be published along with the paper.
dc.language | eng
dc.relation.ispartof | Proceedings of the IEEE International Conference on Computer Vision
dc.title | Learning to reconstruct 3D manhattan wireframes from a single image
dc.type | Conference_Paper
dc.description.nature | link_to_subscribed_fulltext
dc.identifier.doi | 10.1109/ICCV.2019.00779
dc.identifier.scopus | eid_2-s2.0-85081915565
dc.identifier.volume | 2019-October
dc.identifier.spage | 7697
dc.identifier.epage | 7706
dc.identifier.isi | WOS:000548549202079
