Links for fulltext (may require subscription):
- Publisher (DOI): 10.1109/TIP.2021.3092818
- Scopus: eid_2-s2.0-85110893524
- PMID: 34255628
- Web of Science: WOS:000690439600001
Article: Cross-Dataset Point Cloud Recognition Using Deep-Shallow Domain Adaptation Network
Title | Cross-Dataset Point Cloud Recognition Using Deep-Shallow Domain Adaptation Network |
---|---|
Authors | Wang, Feiyu; Li, Wen; Xu, Dong |
Keywords | Adaptation models; Co-training; Domain Adaptation; Feature extraction; Image recognition; Point Cloud; Target recognition; Task analysis; Three-dimensional displays; Training |
Issue Date | 2021 |
Citation | IEEE Transactions on Image Processing, 2021, v. 30, p. 7364-7377 |
Abstract | In this work, we propose a novel two-view domain adaptation network, the Deep-Shallow Domain Adaptation Network (DSDAN), for 3D point cloud recognition. Unlike traditional 2D image recognition, point cloud data often lacks valuable texture information, which makes point cloud recognition challenging, especially in the cross-dataset scenario where the training and test data exhibit a considerable distribution mismatch. Our DSDAN method tackles the cross-dataset 3D point cloud recognition task from two aspects. On one hand, we propose a two-view learning framework that effectively leverages multiple feature representations to improve recognition performance. To this end, we propose a simple and efficient Bag-of-Points feature method as a complementary view to the deep representation, together with a cross-view consistency loss that boosts the two-view learning framework. On the other hand, we propose a two-level adaptation strategy to address the domain distribution mismatch. Specifically, we apply a feature-level distribution alignment module for each view, and we propose an instance-level adaptation approach that selects highly confident pseudo-labeled target samples for adapting the model to the target domain; a co-training scheme then integrates the learning and adaptation processes across the two views. Extensive experiments on the benchmark dataset show that our DSDAN method outperforms existing state-of-the-art methods for cross-dataset point cloud recognition. |
Persistent Identifier | http://hdl.handle.net/10722/322058 |
ISSN | 1057-7149 (print); 1941-0042 (online); 2023 Impact Factor: 10.8; 2023 SCImago Journal Rankings: 3.556 |
ISI Accession Number ID | WOS:000690439600001 |
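The abstract describes two reusable ideas: a cross-view consistency loss that encourages the deep and Bag-of-Points views to agree, and an instance-level selection of highly confident pseudo-labeled target samples. As an illustrative sketch only (not the authors' implementation; the function names, the squared-error form of the consistency term, the averaging of the two views, and the 0.9 confidence threshold are all assumptions), the two ideas might look like:

```python
import numpy as np


def cross_view_consistency(p_deep: np.ndarray, p_shallow: np.ndarray) -> float:
    """Mean squared difference between the two views' class-probability
    predictions (shape: n_samples x n_classes). Minimizing this term
    pushes the deep view and the Bag-of-Points view toward agreement."""
    return float(np.mean((p_deep - p_shallow) ** 2))


def select_confident_pseudo_labels(p_deep: np.ndarray,
                                   p_shallow: np.ndarray,
                                   threshold: float = 0.9):
    """Average the two views' predictions, then keep only the target
    samples whose top-class confidence exceeds the threshold; those
    samples receive pseudo-labels for instance-level adaptation."""
    p_avg = 0.5 * (p_deep + p_shallow)
    confidence = p_avg.max(axis=1)
    labels = p_avg.argmax(axis=1)
    keep = confidence >= threshold
    return np.where(keep)[0], labels[keep]


# Toy example: two target samples, two classes.
p_deep = np.array([[0.95, 0.05], [0.50, 0.50]])
p_shallow = np.array([[0.93, 0.07], [0.60, 0.40]])

loss = cross_view_consistency(p_deep, p_shallow)
idx, pseudo = select_confident_pseudo_labels(p_deep, p_shallow)
# Only the first sample (averaged confidence 0.94) passes the threshold.
```

In a co-training scheme of this flavor, each view's confident pseudo-labels would augment the training set of the other view on the next round, interleaving learning and adaptation as the abstract outlines.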
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wang, Feiyu | - |
dc.contributor.author | Li, Wen | - |
dc.contributor.author | Xu, Dong | - |
dc.date.accessioned | 2022-11-03T02:23:19Z | - |
dc.date.available | 2022-11-03T02:23:19Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | IEEE Transactions on Image Processing, 2021, v. 30, p. 7364-7377 | - |
dc.identifier.issn | 1057-7149 | - |
dc.identifier.uri | http://hdl.handle.net/10722/322058 | - |
dc.description.abstract | In this work, we propose a novel two-view domain adaptation network named Deep-Shallow Domain Adaptation Network (DSDAN) for 3D point cloud recognition. Different from the traditional 2D image recognition task, the valuable texture information is often absent in point cloud data, making point cloud recognition a challenging task, especially in the cross-dataset scenario where the training and test data exhibit a considerable distribution mismatch. In our DSDAN method, we tackle the challenging cross-dataset 3D point cloud recognition task from two aspects. On one hand, we propose a two-view learning framework, such that we can effectively leverage multiple feature representations to improve the recognition performance. To this end, we propose a simple and efficient Bag-of-Points feature method, as a complementary view to the deep representation. Moreover, we also propose a cross view consistency loss to boost the two-view learning framework. On the other hand, we further propose a two-level adaptation strategy to effectively address the domain distribution mismatch issue. Specifically, we apply a feature-level distribution alignment module for each view, and also propose an instance-level adaptation approach to select highly confident pseudo-labeled target samples for adapting the model to the target domain, based on which a co-training scheme is used to integrate the learning and adaptation process on the two views. Extensive experiments on the benchmark dataset show that our newly proposed DSDAN method outperforms the existing state-of-the-art methods for the cross-dataset point cloud recognition task. | - |
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Image Processing | - |
dc.subject | Adaptation models | - |
dc.subject | Co-training | - |
dc.subject | Domain Adaptation | - |
dc.subject | Feature extraction | - |
dc.subject | Image recognition | - |
dc.subject | Point Cloud | - |
dc.subject | Target recognition | - |
dc.subject | Task analysis | - |
dc.subject | Three-dimensional displays | - |
dc.subject | Training | - |
dc.title | Cross-Dataset Point Cloud Recognition Using Deep-Shallow Domain Adaptation Network | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TIP.2021.3092818 | - |
dc.identifier.pmid | 34255628 | - |
dc.identifier.scopus | eid_2-s2.0-85110893524 | - |
dc.identifier.volume | 30 | - |
dc.identifier.spage | 7364 | - |
dc.identifier.epage | 7377 | - |
dc.identifier.eissn | 1941-0042 | - |
dc.identifier.isi | WOS:000690439600001 | - |