Conference Paper: PU-Net: Point Cloud Upsampling Network

Title: PU-Net: Point Cloud Upsampling Network
Authors: Yu, Lequan; Li, Xianzhi; Fu, Chi Wing; Cohen-Or, Daniel; Heng, Pheng Ann
Issue Date: 2018
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2018, p. 2790-2799
Abstract: Learning and analyzing 3D point clouds with deep networks is challenging due to the sparseness and irregularity of the data. In this paper, we present a data-driven point cloud upsampling technique. The key idea is to learn multi-level features per point and expand the point set via a multi-branch convolution unit implicitly in feature space. The expanded feature is then split to a multitude of features, which are then reconstructed to an upsampled point set. Our network is applied at a patch-level, with a joint loss function that encourages the upsampled points to remain on the underlying surface with a uniform distribution. We conduct various experiments using synthesis and scan data to evaluate our method and demonstrate its superiority over some baseline methods and an optimization-based method. Results show that our upsampled points have better uniformity and are located closer to the underlying surfaces.
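The expansion-then-reconstruction idea described in the abstract can be sketched in NumPy. This is an illustrative toy only: random linear maps stand in for the paper's learned multi-branch convolutions and coordinate-regression layers, and all function names here are hypothetical, not the authors' implementation.

```python
import numpy as np

def expand_features(feats, r, out_dim, rng):
    """Feature-space expansion sketch: push each point's feature through
    r independent linear branches (stand-ins for PU-Net's learned
    per-branch convolutions) and interleave the outputs, so N input
    points become r*N expanded feature vectors."""
    n, c = feats.shape
    branches = [rng.standard_normal((c, out_dim)) for _ in range(r)]  # hypothetical weights
    expanded = np.stack([feats @ w for w in branches], axis=1)  # (N, r, out_dim)
    return expanded.reshape(n * r, out_dim)

def regress_coords(expanded, rng):
    """Map each expanded feature vector to an (x, y, z) position,
    mimicking the final coordinate-reconstruction step."""
    w = rng.standard_normal((expanded.shape[1], 3))  # hypothetical weights
    return expanded @ w  # (r*N, 3)

rng = np.random.default_rng(0)
point_feats = rng.standard_normal((128, 64))  # 128 points with 64-D multi-level features
up = regress_coords(expand_features(point_feats, r=4, out_dim=32, rng=rng), rng)
print(up.shape)  # 4x upsampling: (512, 3)
```

In the actual network these branches are trained end-to-end under the joint loss, which is what keeps the regressed points on the surface and uniformly spread; the sketch only shows how the point count is multiplied in feature space rather than by duplicating coordinates.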
Persistent Identifier: http://hdl.handle.net/10722/299580
ISSN: 1063-6919
2023 SCImago Journal Rankings: 10.331
ISI Accession Number ID: WOS:000457843602095


DC Field: Value
dc.contributor.author: Yu, Lequan
dc.contributor.author: Li, Xianzhi
dc.contributor.author: Fu, Chi Wing
dc.contributor.author: Cohen-Or, Daniel
dc.contributor.author: Heng, Pheng Ann
dc.date.accessioned: 2021-05-21T03:34:43Z
dc.date.available: 2021-05-21T03:34:43Z
dc.date.issued: 2018
dc.identifier.citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2018, p. 2790-2799
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/10722/299580
dc.description.abstract: Learning and analyzing 3D point clouds with deep networks is challenging due to the sparseness and irregularity of the data. In this paper, we present a data-driven point cloud upsampling technique. The key idea is to learn multi-level features per point and expand the point set via a multi-branch convolution unit implicitly in feature space. The expanded feature is then split to a multitude of features, which are then reconstructed to an upsampled point set. Our network is applied at a patch-level, with a joint loss function that encourages the upsampled points to remain on the underlying surface with a uniform distribution. We conduct various experiments using synthesis and scan data to evaluate our method and demonstrate its superiority over some baseline methods and an optimization-based method. Results show that our upsampled points have better uniformity and are located closer to the underlying surfaces.
dc.language: eng
dc.relation.ispartof: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
dc.title: PU-Net: Point Cloud Upsampling Network
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/CVPR.2018.00295
dc.identifier.scopus: eid_2-s2.0-85055083474
dc.identifier.spage: 2790
dc.identifier.epage: 2799
dc.identifier.isi: WOS:000457843602095
