Article: Sequential Point Cloud Upsampling by Exploiting Multi-Scale Temporal Dependency

Title: Sequential Point Cloud Upsampling by Exploiting Multi-Scale Temporal Dependency
Authors: Wang, Kaisiyuan; Sheng, Lu; Gu, Shuhang; Xu, Dong
Keywords: dynamic point cloud sequences; point cloud upsampling; spatio-temporal exploration
Issue Date: 2021
Citation: IEEE Transactions on Circuits and Systems for Video Technology, 2021, v. 31, n. 12, p. 4686-4696
Abstract: In this work, we propose a new sequential point cloud upsampling method called SPU, which aims to upsample sparse, non-uniform, and orderless point cloud sequences by effectively exploiting rich and complementary temporal dependency from multiple inputs. Specifically, these inputs include a set of multi-scale short-term features from the 3D points in three consecutive frames (i.e., the previous/current/subsequent frame) and a long-term latent representation accumulated throughout the point cloud sequence. Considering that these temporal clues are not well aligned in the coordinate space, we propose a new temporal alignment module (TAM) based on the cross-attention mechanism to transform each individual feature into the feature space of the current frame. We also propose a new gating mechanism to learn the optimal weights for these transformed features, based on which the transformed features can be effectively aggregated as the final fused feature. The fused feature can be readily fed into the existing single frame-based point cloud upsampling methods (e.g., PU-Net, MPU and PU-GAN) to generate the dense point cloud for the current frame. Comprehensive experiments on three benchmark datasets DYNA, COMA, and MSR Action3D demonstrate the effectiveness of our method for upsampling point cloud sequences.
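The abstract describes two building blocks: a temporal alignment module that uses cross-attention to bring per-frame features into the current frame's feature space, and a gating mechanism that learns weights for aggregating the aligned features. The following is only a minimal NumPy sketch of that general idea, not the authors' implementation; all function names, shapes, and the scalar-gate formulation are assumptions for illustration (the paper's actual TAM and gating are learned modules with their own architecture).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_align(query, source):
    """Align source-frame features to the query (current) frame via scaled
    dot-product cross-attention.  query: (N, C), source: (M, C).
    Hypothetical simplification: no learned projections."""
    attn = softmax(query @ source.T / np.sqrt(query.shape[-1]), axis=-1)  # (N, M)
    return attn @ source  # (N, C), expressed in the current frame's feature space

def gated_fusion(aligned_feats, gate_logits):
    """Fuse K aligned feature maps with softmax-normalized gate weights.
    Hypothetical stand-in for a learned gating network."""
    w = softmax(np.asarray(gate_logits, dtype=float))  # (K,), sums to 1
    return sum(wi * f for wi, f in zip(w, aligned_feats))

# Toy example: current frame with N=4 points, neighbor frames with M=5 points,
# C=8 feature channels; gate logits would normally be predicted, not fixed.
rng = np.random.default_rng(0)
cur = rng.normal(size=(4, 8))
prev, nxt = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
aligned = [cross_attention_align(cur, f) for f in (prev, cur, nxt)]
fused = gated_fusion(aligned, gate_logits=[0.2, 0.5, 0.3])  # (4, 8)
```

Per the abstract, such a fused feature is then handed to an off-the-shelf single-frame upsampler (PU-Net, MPU, or PU-GAN) to produce the dense point cloud for the current frame.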
Persistent Identifier: http://hdl.handle.net/10722/321975
ISSN: 1051-8215
2021 Impact Factor: 5.859
2020 SCImago Journal Rankings: 0.873
ISI Accession Number ID: WOS:000725812500014

 

DC Field: Value

dc.contributor.author: Wang, Kaisiyuan
dc.contributor.author: Sheng, Lu
dc.contributor.author: Gu, Shuhang
dc.contributor.author: Xu, Dong
dc.date.accessioned: 2022-11-03T02:22:44Z
dc.date.available: 2022-11-03T02:22:44Z
dc.date.issued: 2021
dc.identifier.citation: IEEE Transactions on Circuits and Systems for Video Technology, 2021, v. 31, n. 12, p. 4686-4696
dc.identifier.issn: 1051-8215
dc.identifier.uri: http://hdl.handle.net/10722/321975
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Circuits and Systems for Video Technology
dc.subject: dynamic point cloud sequences
dc.subject: Point cloud upsampling
dc.subject: spatio-temporal exploration
dc.title: Sequential Point Cloud Upsampling by Exploiting Multi-Scale Temporal Dependency
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TCSVT.2021.3104304
dc.identifier.scopus: eid_2-s2.0-85121024185
dc.identifier.volume: 31
dc.identifier.issue: 12
dc.identifier.spage: 4686
dc.identifier.epage: 4696
dc.identifier.eissn: 1558-2205
dc.identifier.isi: WOS:000725812500014
