Conference Paper: Multi-Modal Self-Supervised Learning for Recommendation

Title: Multi-Modal Self-Supervised Learning for Recommendation
Authors: Wei, Wei; Huang, Chao; Xia, Lianghao; Zhang, Chuxu
Keywords: Multi-Modal Recommendation; Self-Supervised Learning
Issue Date: 2023
Citation: ACM Web Conference 2023 - Proceedings of the World Wide Web Conference, WWW 2023, 2023, p. 790-800
Abstract: The emergence of online multi-modal sharing platforms (e.g., TikTok, YouTube) is driving personalized recommender systems to incorporate various modalities (e.g., visual, textual, and acoustic) into latent user representations. While existing work on multi-modal recommendation exploits multimedia content features to enhance item embeddings, its representation capability is limited by heavy reliance on labels and weak robustness to sparse user behavior data. Inspired by the recent progress of self-supervised learning in alleviating the label scarcity issue, we explore deriving self-supervision signals while effectively learning modality-aware user preferences and cross-modal dependencies. To this end, we propose a new Multi-Modal Self-Supervised Learning (MMSSL) method that tackles two key challenges. Specifically, to characterize the inter-dependency between the user-item collaborative view and the item multi-modal semantic view, we design a modality-aware interactive structure learning paradigm that uses adversarial perturbations for data augmentation. In addition, to capture how users' modality-aware interaction patterns interweave with each other, a cross-modal contrastive learning approach is introduced to jointly preserve inter-modal semantic commonality and user preference diversity. Experiments on real-world datasets verify the superiority of our method over various state-of-the-art baselines, demonstrating its potential for multimedia recommendation. The implementation is released at: https://github.com/HKUDS/MMSSL. (Illustrative sketches of the adversarial augmentation and cross-modal contrastive components appear after this record.)
Persistent Identifier: http://hdl.handle.net/10722/355938
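
The abstract above names two self-supervised components: adversarial perturbations that augment the modality-aware interaction structure, and cross-modal contrastive learning over modality-specific user preferences. The two sketches below illustrate only the general form of these ideas; they are not taken from the released MMSSL code at https://github.com/HKUDS/MMSSL, and every function name, tensor shape, and hyperparameter value is an assumption made for illustration.

A minimal InfoNCE-style cross-modal contrastive loss, assuming two modality-aware user embedding matrices (e.g., a visual view and a textual view) in which the two views of the same user form the positive pair and all other users act as negatives:

import torch
import torch.nn.functional as F

def cross_modal_infonce(view_a: torch.Tensor,
                        view_b: torch.Tensor,
                        temperature: float = 0.2) -> torch.Tensor:
    # view_a, view_b: [num_users, dim] modality-aware user embeddings
    # (hypothetical names, e.g., visual-view and textual-view preferences).
    view_a = F.normalize(view_a, dim=-1)
    view_b = F.normalize(view_b, dim=-1)
    # Cosine similarities between every pair of users across the two views.
    logits = view_a @ view_b.t() / temperature      # [num_users, num_users]
    # Diagonal entries are the positives: the same user seen in two modalities.
    labels = torch.arange(view_a.size(0), device=view_a.device)
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings standing in for learned modality views.
visual_view = torch.randn(128, 64)
textual_view = torch.randn(128, 64)
print(cross_modal_infonce(visual_view, textual_view).item())

And a generic gradient-sign (FGSM-style) perturbation that turns a recommendation loss into an adversarially augmented embedding view. This is a simpler stand-in for the modality-aware adversarial structure learning described in the abstract; adversarial_augment and epsilon are hypothetical names:

def adversarial_augment(embeddings: torch.Tensor,
                        loss: torch.Tensor,
                        epsilon: float = 0.05) -> torch.Tensor:
    # embeddings: a tensor with requires_grad=True that `loss` depends on.
    # Returns a copy pushed in the direction that increases `loss`, detached
    # so it can be consumed as a fixed augmented view.
    grad, = torch.autograd.grad(loss, embeddings, retain_graph=True)
    return (embeddings + epsilon * grad.sign()).detach()

# Toy usage: perturb stand-in item modality features against a stand-in loss.
item_visual = torch.randn(500, 64, requires_grad=True)
toy_loss = item_visual.pow(2).mean()
augmented_view = adversarial_augment(item_visual, toy_loss)

In a full pipeline, losses of this kind would typically be added, with tunable weights, to the main recommendation objective; the weighting scheme is again an assumption, not something specified by this record.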

 

DC Field: Value
dc.contributor.author: Wei, Wei
dc.contributor.author: Huang, Chao
dc.contributor.author: Xia, Lianghao
dc.contributor.author: Zhang, Chuxu
dc.date.accessioned: 2025-05-19T05:46:47Z
dc.date.available: 2025-05-19T05:46:47Z
dc.date.issued: 2023
dc.identifier.citation: ACM Web Conference 2023 - Proceedings of the World Wide Web Conference, WWW 2023, 2023, p. 790-800
dc.identifier.uri: http://hdl.handle.net/10722/355938
dc.description.abstract: The emergence of online multi-modal sharing platforms (e.g., TikTok, YouTube) is driving personalized recommender systems to incorporate various modalities (e.g., visual, textual, and acoustic) into latent user representations. While existing work on multi-modal recommendation exploits multimedia content features to enhance item embeddings, its representation capability is limited by heavy reliance on labels and weak robustness to sparse user behavior data. Inspired by the recent progress of self-supervised learning in alleviating the label scarcity issue, we explore deriving self-supervision signals while effectively learning modality-aware user preferences and cross-modal dependencies. To this end, we propose a new Multi-Modal Self-Supervised Learning (MMSSL) method that tackles two key challenges. Specifically, to characterize the inter-dependency between the user-item collaborative view and the item multi-modal semantic view, we design a modality-aware interactive structure learning paradigm that uses adversarial perturbations for data augmentation. In addition, to capture how users' modality-aware interaction patterns interweave with each other, a cross-modal contrastive learning approach is introduced to jointly preserve inter-modal semantic commonality and user preference diversity. Experiments on real-world datasets verify the superiority of our method over various state-of-the-art baselines, demonstrating its potential for multimedia recommendation. The implementation is released at: https://github.com/HKUDS/MMSSL.
dc.language: eng
dc.relation.ispartof: ACM Web Conference 2023 - Proceedings of the World Wide Web Conference, WWW 2023
dc.subject: Multi-Modal Recommendation
dc.subject: Self-Supervised Learning
dc.title: Multi-Modal Self-Supervised Learning for Recommendation
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1145/3543507.3583206
dc.identifier.scopus: eid_2-s2.0-85159378843
dc.identifier.spage: 790
dc.identifier.epage: 800
