Conference Paper: Representation Learning with Large Language Models for Recommendation

Title: Representation Learning with Large Language Models for Recommendation
Authors: Ren, Xubin; Wei, Wei; Xia, Lianghao; Su, Lixin; Cheng, Suqi; Wang, Junfeng; Yin, Dawei; Huang, Chao
Keywords: alignment; large language models; recommendation
Issue Date: 2024
Citation: WWW 2024 - Proceedings of the ACM Web Conference, 2024, p. 3464-3475
Abstract: Recommender systems have seen significant advancements with the influence of deep learning and graph neural networks, particularly in capturing complex user-item relationships. However, these graph-based recommenders heavily depend on ID-based data, potentially disregarding valuable textual information associated with users and items, resulting in less informative learned representations. Moreover, the utilization of implicit feedback data introduces potential noise and bias, posing challenges for the effectiveness of user preference learning. While the integration of large language models (LLMs) into traditional ID-based recommenders has gained attention, challenges such as scalability issues, limitations in text-only reliance, and prompt input constraints need to be addressed for effective implementation in practical recommender systems. To address these challenges, we propose a model-agnostic framework RLMRec that aims to enhance existing recommenders with LLM-empowered representation learning. It proposes a recommendation paradigm that integrates representation learning with LLMs to capture intricate semantic aspects of user behaviors and preferences. RLMRec incorporates auxiliary textual signals, employs LLMs for user/item profiling, and aligns the semantic space of LLMs with collaborative relational signals through cross-view alignment. This work further demonstrates the theoretical foundation of incorporating textual signals through mutual information maximization, which improves the quality of representations. Our evaluation integrates RLMRec with state-of-the-art recommender models, while also analyzing its efficiency and robustness to noisy data. Implementation codes are available at https://github.com/HKUDS/RLMRec.
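The cross-view alignment via mutual information maximization described in the abstract is, in spirit, a contrastive objective between two views of each user or item: a collaborative (ID-based) embedding and an LLM-derived semantic embedding. The sketch below uses an InfoNCE-style loss, a standard lower bound on mutual information; the function name, the exact loss form, and the temperature value are illustrative assumptions, not the paper's precise objective (see the linked repository for the actual implementation).

```python
import numpy as np

def info_nce_alignment(collab_emb: np.ndarray,
                       sem_emb: np.ndarray,
                       temperature: float = 0.2) -> float:
    """Contrastive alignment of collaborative and semantic views.

    Row i of each matrix is one user/item; entry (i, i) of the
    similarity matrix is the positive pair, all other columns of
    row i act as in-batch negatives. Minimizing this loss maximizes
    an InfoNCE lower bound on the mutual information between views.
    """
    # L2-normalize each view so dot products are cosine similarities.
    c = collab_emb / np.linalg.norm(collab_emb, axis=1, keepdims=True)
    s = sem_emb / np.linalg.norm(sem_emb, axis=1, keepdims=True)
    logits = c @ s.T / temperature            # (N, N) similarity matrix
    # Numerically stable row-wise log-softmax.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Negative log-likelihood of the diagonal (positive) pairs.
    return float(-np.mean(np.diag(log_probs)))
```

When the two views agree (each collaborative embedding is closest to its own semantic profile), the diagonal dominates each row and the loss approaches zero; for unrelated views it stays near log N.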
Persistent Identifier: http://hdl.handle.net/10722/355962


DC Field: Value
dc.contributor.author: Ren, Xubin
dc.contributor.author: Wei, Wei
dc.contributor.author: Xia, Lianghao
dc.contributor.author: Su, Lixin
dc.contributor.author: Cheng, Suqi
dc.contributor.author: Wang, Junfeng
dc.contributor.author: Yin, Dawei
dc.contributor.author: Huang, Chao
dc.date.accessioned: 2025-05-19T05:46:55Z
dc.date.available: 2025-05-19T05:46:55Z
dc.date.issued: 2024
dc.identifier.citation: WWW 2024 - Proceedings of the ACM Web Conference, 2024, p. 3464-3475
dc.identifier.uri: http://hdl.handle.net/10722/355962
dc.description.abstract: Recommender systems have seen significant advancements with the influence of deep learning and graph neural networks, particularly in capturing complex user-item relationships. However, these graph-based recommenders heavily depend on ID-based data, potentially disregarding valuable textual information associated with users and items, resulting in less informative learned representations. Moreover, the utilization of implicit feedback data introduces potential noise and bias, posing challenges for the effectiveness of user preference learning. While the integration of large language models (LLMs) into traditional ID-based recommenders has gained attention, challenges such as scalability issues, limitations in text-only reliance, and prompt input constraints need to be addressed for effective implementation in practical recommender systems. To address these challenges, we propose a model-agnostic framework RLMRec that aims to enhance existing recommenders with LLM-empowered representation learning. It proposes a recommendation paradigm that integrates representation learning with LLMs to capture intricate semantic aspects of user behaviors and preferences. RLMRec incorporates auxiliary textual signals, employs LLMs for user/item profiling, and aligns the semantic space of LLMs with collaborative relational signals through cross-view alignment. This work further demonstrates the theoretical foundation of incorporating textual signals through mutual information maximization, which improves the quality of representations. Our evaluation integrates RLMRec with state-of-the-art recommender models, while also analyzing its efficiency and robustness to noisy data. Implementation codes are available at https://github.com/HKUDS/RLMRec.
dc.language: eng
dc.relation.ispartof: WWW 2024 - Proceedings of the ACM Web Conference
dc.subject: alignment
dc.subject: large language models
dc.subject: recommendation
dc.title: Representation Learning with Large Language Models for Recommendation
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1145/3589334.3645458
dc.identifier.scopus: eid_2-s2.0-85187884583
dc.identifier.spage: 3464
dc.identifier.epage: 3475
