Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1145/3539618.3591665
- Scopus: eid_2-s2.0-85164195594
- WOS: WOS:001118084001020
Conference Paper: Disentangled Contrastive Collaborative Filtering
| Title | Disentangled Contrastive Collaborative Filtering |
|---|---|
| Authors | Ren, Xubin; Xia, Lianghao; Zhao, Jiashu; Yin, Dawei; Huang, Chao |
| Issue Date | 2023 |
| Citation | SIGIR 2023 - Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2023, p. 1137-1146 |
| Abstract | Recent studies show that graph neural networks (GNNs) are widely used to model high-order relationships for collaborative filtering (CF). Along this research line, graph contrastive learning (GCL) has exhibited powerful performance in addressing the supervision label shortage issue by learning augmented user and item representations. While many of these methods are effective, two key questions remain unexplored: i) most existing GCL-based CF models ignore the fact that user-item interaction behaviors are often driven by diverse latent intent factors (e.g., shopping for a family party, or a preferred color or brand of products); ii) their non-adaptive augmentation techniques are vulnerable to noisy information, which raises concerns about the model's robustness and the risk of incorporating misleading self-supervised signals. In light of these limitations, we propose a Disentangled Contrastive Collaborative Filtering framework (DCCF) that realizes intent disentanglement with self-supervised augmentation in an adaptive fashion. With disentangled representations learned over global context, DCCF is able not only to distill finer-grained latent factors from the entangled self-supervision signals but also to alleviate augmentation-induced noise. Finally, a cross-view contrastive learning task is introduced to enable adaptive augmentation with our parameterized interaction mask generator. Experiments on various public datasets demonstrate the superiority of our method compared to existing solutions. Our model implementation is released at https://github.com/HKUDS/DCCF. |
| Persistent Identifier | http://hdl.handle.net/10722/355943 |
| ISI Accession Number ID | WOS:001118084001020 |
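The cross-view contrastive learning task mentioned in the abstract is commonly instantiated as an InfoNCE loss between two augmented views of the same user/item embeddings. The sketch below is an illustrative assumption, not the paper's exact formulation: the function name, embedding shapes, and temperature are hypothetical, and DCCF's intent disentanglement and parameterized interaction mask generator are omitted.

```python
import numpy as np

def info_nce(view1, view2, temperature=0.2):
    """Cross-view InfoNCE: row i of view1 is pulled toward row i of
    view2 (its positive pair) and pushed away from all other rows."""
    # L2-normalize so dot products are cosine similarities
    v1 = view1 / np.linalg.norm(view1, axis=1, keepdims=True)
    v2 = view2 / np.linalg.norm(view2, axis=1, keepdims=True)
    logits = v1 @ v2.T / temperature  # (n, n) similarity matrix
    # row-wise log-softmax; positives sit on the diagonal
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
emb = rng.normal(size=(32, 8))
# two views of the same embeddings (small perturbation) vs. unrelated views
aligned = info_nce(emb, emb + 0.05 * rng.normal(size=emb.shape))
random_pairs = info_nce(emb, rng.normal(size=emb.shape))
```

Minimizing such a loss drives corresponding rows of the two views together, which is why `aligned` comes out far below `random_pairs`.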
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Ren, Xubin | - |
| dc.contributor.author | Xia, Lianghao | - |
| dc.contributor.author | Zhao, Jiashu | - |
| dc.contributor.author | Yin, Dawei | - |
| dc.contributor.author | Huang, Chao | - |
| dc.date.accessioned | 2025-05-19T05:46:49Z | - |
| dc.date.available | 2025-05-19T05:46:49Z | - |
| dc.date.issued | 2023 | - |
| dc.identifier.citation | SIGIR 2023 - Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2023, p. 1137-1146 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/355943 | - |
| dc.description.abstract | Recent studies show that graph neural networks (GNNs) are widely used to model high-order relationships for collaborative filtering (CF). Along this research line, graph contrastive learning (GCL) has exhibited powerful performance in addressing the supervision label shortage issue by learning augmented user and item representations. While many of these methods are effective, two key questions remain unexplored: i) most existing GCL-based CF models ignore the fact that user-item interaction behaviors are often driven by diverse latent intent factors (e.g., shopping for a family party, or a preferred color or brand of products); ii) their non-adaptive augmentation techniques are vulnerable to noisy information, which raises concerns about the model's robustness and the risk of incorporating misleading self-supervised signals. In light of these limitations, we propose a Disentangled Contrastive Collaborative Filtering framework (DCCF) that realizes intent disentanglement with self-supervised augmentation in an adaptive fashion. With disentangled representations learned over global context, DCCF is able not only to distill finer-grained latent factors from the entangled self-supervision signals but also to alleviate augmentation-induced noise. Finally, a cross-view contrastive learning task is introduced to enable adaptive augmentation with our parameterized interaction mask generator. Experiments on various public datasets demonstrate the superiority of our method compared to existing solutions. Our model implementation is released at https://github.com/HKUDS/DCCF. | - |
| dc.language | eng | - |
| dc.relation.ispartof | SIGIR 2023 - Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval | - |
| dc.title | Disentangled Contrastive Collaborative Filtering | - |
| dc.type | Conference_Paper | - |
| dc.description.nature | link_to_subscribed_fulltext | - |
| dc.identifier.doi | 10.1145/3539618.3591665 | - |
| dc.identifier.scopus | eid_2-s2.0-85164195594 | - |
| dc.identifier.spage | 1137 | - |
| dc.identifier.epage | 1146 | - |
| dc.identifier.isi | WOS:001118084001020 | - |
