Article: Last-Iterate Convergence in No-Regret Learning: Games with Reference Effects Under Logit Demand

Title: Last-Iterate Convergence in No-Regret Learning: Games with Reference Effects Under Logit Demand
Authors: Guo, Amy Mengzi; Ying, Donghao; Lavaei, Javad; Shen, Max Zuo-Jun
Issue Date: 21-May-2025
Publisher: Institute for Operations Research and Management Sciences
Citation: Management Science, 2025
Abstract

This work examines the behavior of the online projected gradient ascent (OPGA) algorithm and its variant in a repeated oligopoly price competition under reference effects. In particular, we consider multiple firms engaging in a multiperiod price competition, where consecutive periods are linked by the reference-price update and each firm has access only to its own first-order feedback. Consumers assess their willingness to pay by comparing the current price against a memory-based reference price, and their choices follow the multinomial logit (MNL) model. We use the notion of stationary Nash equilibrium (SNE), defined as the fixed point of the equilibrium pricing policy, to simultaneously capture long-run equilibrium and stability. We first study loss-neutral reference effects and show that if the firms employ the OPGA algorithm—adjusting prices using the first-order derivatives of their log-revenues—the price and reference-price paths attain last-iterate convergence to the unique SNE, thereby guaranteeing no-regret learning and market stability. Moreover, with appropriate step-sizes, we prove that this algorithm exhibits a convergence rate of Õ(1/t²) in terms of the squared distance and achieves a constant dynamic regret. Despite the simplicity of the algorithm, its convergence analysis is challenging because the model lacks typical properties, such as strong monotonicity and variational stability, that are ordinarily used in the convergence analysis of online games. The inherently asymmetric nature of reference effects motivates exploration beyond loss-neutrality. When loss-averse reference effects are introduced, we propose a variant of the original algorithm, called conservative OPGA (C-OPGA), to handle the nonsmooth revenue functions, and show that the price and reference price achieve last-iterate convergence to the set of SNEs at the rate of O(1/√t).
Finally, we demonstrate the practicality and robustness of OPGA and C-OPGA by theoretically showing that these algorithms can also accommodate firm-differentiated step-sizes and inexact gradients.
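To make the dynamics described in the abstract concrete, below is a minimal Python sketch of an OPGA-style update under MNL demand with a loss-neutral reference effect. It is an illustration under stated assumptions, not the paper's exact model: the utility form u_i = a − b·p_i + c·(r_i − p_i), all parameter values, the constant step-size, and the exponential-smoothing reference update are hypothetical choices made for the sketch.

```python
import numpy as np

# Hypothetical parameters; all names and values are illustrative, not from the paper.
a, b, c = 3.0, 1.0, 0.5      # base utility, price sensitivity, reference-effect strength
alpha = 0.7                  # memory weight in the reference-price update (assumed form)
eta = 0.1                    # constant step-size (the paper analyzes tuned schedules)
lo, hi = 0.1, 5.0            # feasible price interval used for the projection

def shares(p, r):
    """MNL choice probabilities with a loss-neutral reference effect."""
    u = a - b * p + c * (r - p)          # utility rises when price is below reference
    e = np.exp(u)
    return e / (1.0 + e.sum())           # outside option has utility 0

def opga_step(p, r):
    """One step: each firm ascends the gradient of its own log-revenue, then projects."""
    s = shares(p, r)
    # log R_i = log p_i + u_i - log(1 + sum_j e^{u_j}), so
    # d log R_i / d p_i = 1/p_i - (b + c) * (1 - s_i)
    grad = 1.0 / p - (b + c) * (1.0 - s)
    p_new = np.clip(p + eta * grad, lo, hi)      # projection onto [lo, hi]
    r_new = alpha * r + (1.0 - alpha) * p_new    # memory-based reference update
    return p_new, r_new

p = np.array([2.0, 0.5, 4.0])   # three symmetric firms, arbitrary starting prices
r = p.copy()
for _ in range(2000):
    p, r = opga_step(p, r)
print(p)  # prices and references settle near a common stationary point
```

With symmetric firms the iterates approach a fixed point where the reference price coincides with the posted price and each firm's log-revenue gradient vanishes, mirroring the stationary-Nash-equilibrium notion in the abstract.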


Persistent Identifier: http://hdl.handle.net/10722/368591
ISSN: 0025-1909
2023 Impact Factor: 4.6
2023 SCImago Journal Rankings: 5.438

 

DC Field: Value
dc.contributor.author: Guo, Amy Mengzi
dc.contributor.author: Ying, Donghao
dc.contributor.author: Lavaei, Javad
dc.contributor.author: Shen, Max Zuo-Jun
dc.date.accessioned: 2026-01-15T00:35:25Z
dc.date.available: 2026-01-15T00:35:25Z
dc.date.issued: 2025-05-21
dc.identifier.citation: Management Science, 2025
dc.identifier.issn: 0025-1909
dc.identifier.uri: http://hdl.handle.net/10722/368591
dc.language: eng
dc.publisher: Institute for Operations Research and Management Sciences
dc.relation.ispartof: Management Science
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.title: Last-Iterate Convergence in No-Regret Learning: Games with Reference Effects Under Logit Demand
dc.type: Article
dc.identifier.doi: 10.1287/mnsc.2023.03464
dc.identifier.eissn: 1526-5501
dc.identifier.issnl: 0025-1909
