Conference Paper: Fatigue-aware bandits for dependent click models

Title: Fatigue-aware bandits for dependent click models
Authors: Cao, Junyu; Sun, Wei; Shen, Zuo Jun; Ettl, Markus
Issue Date: 2020
Citation: AAAI 2020 - 34th AAAI Conference on Artificial Intelligence, 2020, p. 3341-3348
Abstract: As recommender systems send a massive amount of content to keep users engaged, users may experience fatigue, which stems from 1) overexposure to irrelevant content and 2) boredom from seeing too many similar recommendations. To address this problem, we consider an online learning setting where a platform learns a policy to recommend content that takes user fatigue into account. We propose an extension of the Dependent Click Model (DCM) to describe users' behavior. We stipulate that for each piece of content, its attractiveness to a user depends on its intrinsic relevance and a discount factor which measures how many similar pieces of content have been shown. Users view the recommended content sequentially and click on the ones that they find attractive. Users may leave the platform at any time, and the probability of exiting is higher when they do not like the content. Based on users' feedback, the platform learns the relevance of the underlying content as well as the discounting effect due to content fatigue. We refer to this learning task as the "fatigue-aware DCM bandit" problem. We consider two learning scenarios depending on whether the discounting effect is known. For each scenario, we propose a learning algorithm which simultaneously explores and exploits, and characterize its regret bound.
Persistent Identifier: http://hdl.handle.net/10722/336281
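The user model described in the abstract can be illustrated with a short simulation. This is a minimal sketch under stated assumptions: the function name, parameters, and the per-topic geometric discount are illustrative choices, not the paper's exact formulation.

```python
import random

def simulate_session(relevance, topics, discount,
                     exit_prob_no_click, exit_prob_click, seed=None):
    """Simulate one user session under a fatigue-aware DCM (illustrative sketch).

    relevance:           intrinsic relevance (base click probability) per item
    topics:              topic label per item; fatigue accrues per topic
    discount:            multiplicative attractiveness discount applied once
                         per previously shown item of the same topic
    exit_prob_no_click:  probability the user leaves after a non-click
    exit_prob_click:     probability the user leaves after a click
                         (typically lower: users exit more when they dislike content)
    Returns the list of clicked item positions.
    """
    rng = random.Random(seed)
    shown = {}   # topic -> number of same-topic items already shown
    clicks = []
    for i, (rel, topic) in enumerate(zip(relevance, topics)):
        n_similar = shown.get(topic, 0)
        # Attractiveness = intrinsic relevance discounted by content fatigue.
        attractiveness = rel * (discount ** n_similar)
        shown[topic] = n_similar + 1
        if rng.random() < attractiveness:
            clicks.append(i)
            if rng.random() < exit_prob_click:
                break
        elif rng.random() < exit_prob_no_click:
            break
    return clicks
```

In the bandit setting, the platform would repeat such sessions, observing only clicks and exits, and update estimates of each item's relevance (and, in the unknown-discount scenario, the fatigue discount itself).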

 

DC Field                  Value
dc.contributor.author     Cao, Junyu
dc.contributor.author     Sun, Wei
dc.contributor.author     Shen, Zuo Jun
dc.contributor.author     Ettl, Markus
dc.date.accessioned       2024-01-15T08:25:10Z
dc.date.available         2024-01-15T08:25:10Z
dc.date.issued            2020
dc.identifier.citation    AAAI 2020 - 34th AAAI Conference on Artificial Intelligence, 2020, p. 3341-3348
dc.identifier.uri         http://hdl.handle.net/10722/336281
dc.language               eng
dc.relation.ispartof      AAAI 2020 - 34th AAAI Conference on Artificial Intelligence
dc.title                  Fatigue-aware bandits for dependent click models
dc.type                   Conference_Paper
dc.description.nature     link_to_subscribed_fulltext
dc.identifier.scopus      eid_2-s2.0-85106433750
dc.identifier.spage       3341
dc.identifier.epage       3348
