Article: Progressive Modality Cooperation for Multi-Modality Domain Adaptation

Title: Progressive Modality Cooperation for Multi-Modality Domain Adaptation
Authors: Zhang, Weichen; Xu, Dong; Zhang, Jing; Ouyang, Wanli
Keywords: adversarial learning; deep learning; domain adaptation; learning using privileged information (LUPI); multi-modality learning; self-paced learning; transfer learning
Issue Date: 2021
Citation: IEEE Transactions on Image Processing, 2021, v. 30, p. 3293-3306
Abstract: In this work, we propose a new generic multi-modality domain adaptation framework called Progressive Modality Cooperation (PMC) to transfer the knowledge learned from the source domain to the target domain by exploiting multiple modality clues (e.g., RGB and depth) under the multi-modality domain adaptation (MMDA) setting and the more general multi-modality domain adaptation using privileged information (MMDA-PI) setting. Under the MMDA setting, the samples in both domains have all the modalities. Through effective collaboration among multiple modalities, the two newly proposed modules in our PMC select reliable pseudo-labeled target samples, capturing the modality-specific information and the modality-integrated information, respectively. Under the MMDA-PI setting, some modalities are missing in the target domain. Hence, to better exploit the multi-modality data in the source domain, we further propose the PMC with privileged information (PMC-PI) method, which introduces a new multi-modality data generation (MMG) network. MMG generates the missing modalities in the target domain based on the source domain data by considering both domain distribution mismatch and semantics preservation, which are achieved by adversarial learning and by conditioning on weighted pseudo semantic class labels, respectively. Extensive experiments on three image datasets and eight video datasets for various multi-modality cross-domain visual recognition tasks under both the MMDA and MMDA-PI settings clearly demonstrate the effectiveness of our proposed PMC framework.
Persistent Identifier: http://hdl.handle.net/10722/321923
ISSN: 1057-7149
2021 Impact Factor: 11.041
2020 SCImago Journal Rankings: 1.778
ISI Accession Number ID: WOS:000626322500008
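
The abstract describes two cooperation cues for selecting pseudo-labeled target samples under the MMDA setting: agreement between the per-modality classifiers (modality-specific) and the confidence of their fused prediction (modality-integrated), with the selected set grown progressively in a self-paced schedule. The toy sketch below illustrates only that selection recipe; the function and parameter names (cooperative_pseudo_labels, keep_ratio) and all shapes are illustrative assumptions, not the authors' released code.

    # Hypothetical illustration of progressive modality cooperation for
    # pseudo-label selection; names and shapes are assumptions (see above).
    import numpy as np

    def cooperative_pseudo_labels(probs_rgb, probs_depth, keep_ratio):
        """Select target samples whose RGB and depth classifiers agree.

        probs_rgb, probs_depth: (N, C) softmax outputs of the two
        per-modality classifiers on unlabeled target data.
        keep_ratio: fraction of agreeing samples kept this round; raising
        it over rounds gives the progressive, self-paced schedule.
        Returns (indices, pseudo_labels) of the selected samples.
        """
        pred_rgb = probs_rgb.argmax(axis=1)
        pred_depth = probs_depth.argmax(axis=1)
        agree = pred_rgb == pred_depth                  # modality-specific cue
        fused = (probs_rgb + probs_depth) / 2.0         # modality-integrated cue
        confidence = fused.max(axis=1)
        idx = np.flatnonzero(agree)
        if idx.size == 0:
            return idx, idx                             # nothing reliable yet
        ranked = idx[np.argsort(-confidence[idx])]      # most confident first
        kept = ranked[:max(1, int(keep_ratio * idx.size))]
        return kept, pred_rgb[kept]

    # Usage: grow keep_ratio across rounds (self-paced progression).
    rng = np.random.default_rng(0)
    p_rgb = rng.dirichlet(np.ones(5), size=100)
    p_depth = rng.dirichlet(np.ones(5), size=100)
    for rnd, ratio in enumerate((0.2, 0.5, 0.8), start=1):
        kept, labels = cooperative_pseudo_labels(p_rgb, p_depth, ratio)
        print(f"round {rnd}: kept {kept.size} pseudo-labeled samples")

Raising keep_ratio across rounds is what makes the schedule "progressive": early rounds keep only the most reliable agreements, and later rounds admit harder samples.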

 

DC Field | Value | Language
dc.contributor.author | Zhang, Weichen | -
dc.contributor.author | Xu, Dong | -
dc.contributor.author | Zhang, Jing | -
dc.contributor.author | Ouyang, Wanli | -
dc.date.accessioned | 2022-11-03T02:22:23Z | -
dc.date.available | 2022-11-03T02:22:23Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | IEEE Transactions on Image Processing, 2021, v. 30, p. 3293-3306 | -
dc.identifier.issn | 1057-7149 | -
dc.identifier.uri | http://hdl.handle.net/10722/321923 | -
dc.description.abstract | In this work, we propose a new generic multi-modality domain adaptation framework called Progressive Modality Cooperation (PMC) to transfer the knowledge learned from the source domain to the target domain by exploiting multiple modality clues (e.g., RGB and depth) under the multi-modality domain adaptation (MMDA) setting and the more general multi-modality domain adaptation using privileged information (MMDA-PI) setting. Under the MMDA setting, the samples in both domains have all the modalities. Through effective collaboration among multiple modalities, the two newly proposed modules in our PMC select reliable pseudo-labeled target samples, capturing the modality-specific information and the modality-integrated information, respectively. Under the MMDA-PI setting, some modalities are missing in the target domain. Hence, to better exploit the multi-modality data in the source domain, we further propose the PMC with privileged information (PMC-PI) method, which introduces a new multi-modality data generation (MMG) network. MMG generates the missing modalities in the target domain based on the source domain data by considering both domain distribution mismatch and semantics preservation, which are achieved by adversarial learning and by conditioning on weighted pseudo semantic class labels, respectively. Extensive experiments on three image datasets and eight video datasets for various multi-modality cross-domain visual recognition tasks under both the MMDA and MMDA-PI settings clearly demonstrate the effectiveness of our proposed PMC framework. | -
dc.language | eng | -
dc.relation.ispartof | IEEE Transactions on Image Processing | -
dc.subject | adversarial learning | -
dc.subject | deep learning | -
dc.subject | domain adaptation | -
dc.subject | learning using privileged information (LUPI) | -
dc.subject | multi-modality learning | -
dc.subject | self-paced learning | -
dc.subject | transfer learning | -
dc.title | Progressive Modality Cooperation for Multi-Modality Domain Adaptation | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/TIP.2021.3052083 | -
dc.identifier.pmid | 33481713 | -
dc.identifier.scopus | eid_2-s2.0-85100482814 | -
dc.identifier.volume | 30 | -
dc.identifier.spage | 3293 | -
dc.identifier.epage | 3306 | -
dc.identifier.eissn | 1941-0042 | -
dc.identifier.isi | WOS:000626322500008 | -
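
For the MMDA-PI setting, the record's abstract describes the MMG network: missing target-domain modalities are generated so that their distribution matches the real modality (adversarial learning) while the generation is conditioned on weighted pseudo semantic class labels (semantics preservation). The PyTorch sketch below shows that general recipe at the feature level; the layer sizes, optimizer settings, and the confidence-weighting rule are placeholder assumptions, not the paper's actual MMG architecture.

    # Hypothetical feature-level MMG sketch; all sizes, names, and losses are
    # assumptions (see above), not the paper's released network.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    D_FEAT, N_CLASS = 128, 5  # assumed feature and class dimensions

    # Generator: RGB feature + weighted pseudo-label condition -> depth feature.
    gen = nn.Sequential(nn.Linear(D_FEAT + N_CLASS, 256), nn.ReLU(),
                        nn.Linear(256, D_FEAT))
    # Discriminator: real (source) depth features vs. generated ones.
    disc = nn.Sequential(nn.Linear(D_FEAT, 128), nn.ReLU(), nn.Linear(128, 1))
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

    def train_step(rgb_feat, real_depth_feat, pseudo_probs):
        """One adversarial step; pseudo_probs (B, C) are softmax scores used
        as a soft, confidence-weighted semantic condition."""
        # Down-weight low-confidence samples (stand-in for the paper's
        # weighted pseudo semantic class labels).
        weighted = pseudo_probs * pseudo_probs.max(dim=1, keepdim=True).values
        fake_depth = gen(torch.cat([rgb_feat, weighted], dim=1))

        # Discriminator update: match the real depth-feature distribution.
        ones = torch.ones(real_depth_feat.size(0), 1)
        zeros = torch.zeros(fake_depth.size(0), 1)
        d_loss = (F.binary_cross_entropy_with_logits(disc(real_depth_feat), ones)
                  + F.binary_cross_entropy_with_logits(disc(fake_depth.detach()), zeros))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator update: fool the discriminator (domain alignment).
        g_loss = F.binary_cross_entropy_with_logits(
            disc(fake_depth), torch.ones(fake_depth.size(0), 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()

    rgb = torch.randn(8, D_FEAT)
    real_depth = torch.randn(8, D_FEAT)
    probs = torch.softmax(torch.randn(8, N_CLASS), dim=1)
    print(train_step(rgb, real_depth, probs))

A fuller version would also attach a semantic consistency loss (for example, a classifier on the generated features) so that the label condition is actually enforced; the sketch shows only the conditioning input and the adversarial alignment.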
