File Download
There are no files associated with this item.
Links for fulltext (May Require Subscription)
- Publisher Website: 10.1109/TNNLS.2020.3028078
- Scopus: eid_2-s2.0-85124053103
- PMID: 33064659
Article: Domain Adversarial Reinforcement Learning for Partial Domain Adaptation
Title | Domain Adversarial Reinforcement Learning for Partial Domain Adaptation |
---|---|
Authors | Chen, Jin; Wu, Xinxiao; Duan, Lixin; Gao, Shenghua |
Keywords | Adversarial learning; partial domain adaptation; reinforcement learning |
Issue Date | 2022 |
Citation | IEEE Transactions on Neural Networks and Learning Systems, 2022, v. 33, n. 2, p. 539-553 |
Abstract | Partial domain adaptation aims to transfer knowledge from a label-rich source domain to a label-scarce target domain (i.e., the target categories are a subset of the source ones), which relaxes the common assumption in traditional domain adaptation that the label space is fully shared across different domains. In this more general and practical scenario of partial domain adaptation, a major challenge is how to select source instances from the shared categories to ensure positive transfer to the target domain. To address this problem, we propose a domain adversarial reinforcement learning (DARL) framework that progressively selects source instances to learn transferable features between domains by reducing the domain shift. Specifically, we employ deep Q-learning to learn policies for an agent to make selection decisions by approximating the action-value function. Moreover, domain adversarial learning is introduced to learn a common feature subspace for the selected source instances and the target instances, and to contribute to the reward calculation for the agent, which is based on the relevance of the selected source instances to the target domain. Extensive experiments on several benchmark data sets clearly demonstrate the superior performance of our DARL framework over existing state-of-the-art methods for partial domain adaptation. |
Persistent Identifier | http://hdl.handle.net/10722/345165 |
ISSN | 2162-237X |
2023 Impact Factor | 10.2 |
2023 SCImago Journal Rankings | 4.170 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, Jin | - |
dc.contributor.author | Wu, Xinxiao | - |
dc.contributor.author | Duan, Lixin | - |
dc.contributor.author | Gao, Shenghua | - |
dc.date.accessioned | 2024-08-15T09:25:39Z | - |
dc.date.available | 2024-08-15T09:25:39Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | IEEE Transactions on Neural Networks and Learning Systems, 2022, v. 33, n. 2, p. 539-553 | - |
dc.identifier.issn | 2162-237X | - |
dc.identifier.uri | http://hdl.handle.net/10722/345165 | - |
dc.description.abstract | Partial domain adaptation aims to transfer knowledge from a label-rich source domain to a label-scarce target domain (i.e., the target categories are a subset of the source ones), which relaxes the common assumption in traditional domain adaptation that the label space is fully shared across different domains. In this more general and practical scenario of partial domain adaptation, a major challenge is how to select source instances from the shared categories to ensure positive transfer to the target domain. To address this problem, we propose a domain adversarial reinforcement learning (DARL) framework that progressively selects source instances to learn transferable features between domains by reducing the domain shift. Specifically, we employ deep Q-learning to learn policies for an agent to make selection decisions by approximating the action-value function. Moreover, domain adversarial learning is introduced to learn a common feature subspace for the selected source instances and the target instances, and to contribute to the reward calculation for the agent, which is based on the relevance of the selected source instances to the target domain. Extensive experiments on several benchmark data sets clearly demonstrate the superior performance of our DARL framework over existing state-of-the-art methods for partial domain adaptation. | -
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Neural Networks and Learning Systems | - |
dc.subject | Adversarial learning | - |
dc.subject | partial domain adaptation | - |
dc.subject | reinforcement learning | - |
dc.title | Domain Adversarial Reinforcement Learning for Partial Domain Adaptation | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TNNLS.2020.3028078 | - |
dc.identifier.pmid | 33064659 | - |
dc.identifier.scopus | eid_2-s2.0-85124053103 | - |
dc.identifier.volume | 33 | - |
dc.identifier.issue | 2 | - |
dc.identifier.spage | 539 | - |
dc.identifier.epage | 553 | - |
dc.identifier.eissn | 2162-2388 | - |
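
The abstract describes two cooperating components: a deep Q-learning agent that decides which source instances to keep, and a domain adversarial module that both aligns features and supplies the agent's reward. The following is a minimal PyTorch sketch of how these pieces could fit together; every module name, dimension, and the exact reward form are illustrative assumptions, not details taken from the paper.

```python
# Minimal, illustrative sketch of the components named in the abstract: a deep
# Q-network scoring select/discard actions for candidate source instances, and
# a domain discriminator whose output feeds the agent's reward. All names,
# dimensions, and the reward form are assumptions, not the paper's method.
import torch
import torch.nn as nn

FEAT_DIM = 256  # assumed dimensionality of the shared feature subspace


class FeatureExtractor(nn.Module):
    """Maps raw inputs into the shared feature subspace (stand-in backbone)."""

    def __init__(self, in_dim: int = 1024, out_dim: int = FEAT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, out_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class DomainDiscriminator(nn.Module):
    """Predicts P(source) for a feature; trained adversarially vs. the extractor."""

    def __init__(self, dim: int = FEAT_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(f))


class QNetwork(nn.Module):
    """Approximates the action-value function: for each candidate source
    instance (the state), estimates the value of actions {0: discard, 1: select}."""

    def __init__(self, dim: int = FEAT_DIM, n_actions: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, n_actions)
        )

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        return self.net(f)


def select_source_instances(q_net, extractor, source_x, epsilon: float = 0.1):
    """Epsilon-greedy selection of source instances by the Q-learning agent."""
    with torch.no_grad():
        q = q_net(extractor(source_x))                 # (N, 2) action values
        greedy = q.argmax(dim=1)                       # action 1 = select
        explore = torch.rand(len(source_x)) < epsilon  # exploration mask
        random_a = torch.randint(0, 2, (len(source_x),))
        actions = torch.where(explore, random_a, greedy)
    return actions.bool()                              # True = selected


def relevance_reward(discriminator, features):
    """Assumed reward proxy: a selected source instance that the discriminator
    rates as target-like (low P(source)) is treated as relevant to the target
    domain, so it earns a higher reward."""
    with torch.no_grad():
        p_source = discriminator(features).squeeze(1)
    return 1.0 - p_source


if __name__ == "__main__":
    extractor, disc, q_net = FeatureExtractor(), DomainDiscriminator(), QNetwork()
    xs = torch.randn(32, 1024)  # stand-in batch of raw source inputs
    mask = select_source_instances(q_net, extractor, xs)
    rewards = relevance_reward(disc, extractor(xs[mask]))
    print(f"selected {int(mask.sum())} of {len(xs)} source instances")
```

In the full method, the discriminator would be trained to distinguish selected-source from target features while the extractor learns to fool it, and the Q-network would be updated from (state, action, reward) transitions with a standard deep Q-learning target; those training loops are omitted from this sketch.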