
Article: Multi-scale Online Learning: Theory and Applications to Online Auctions and Pricing

Title: Multi-scale Online Learning: Theory and Applications to Online Auctions and Pricing
Authors: Bubeck, S; Devanur, N; Huang, Z; Niazadeh, R
Keywords: online learning; multi-scale learning; auction theory; bandit information; sample complexity
Issue Date: 2019
Publisher: Journal of Machine Learning Research. The Journal's web site is located at http://mitpress.mit.edu/jmlr
Citation: Journal of Machine Learning Research, 2019, v. 20 n. 62, p. 1-37
Abstract: We consider revenue maximization in online auction and pricing problems. A seller sells an identical item in each period to a new buyer, or a new set of buyers. For the online pricing problem, both when the arriving buyer bids and when the buyer only responds to the posted price, we design algorithms whose regret bounds scale with the best fixed price in hindsight rather than with the range of the values. Under the bidding model, we further show that our algorithms achieve a revenue convergence rate matching the offline sample complexity of the single-item single-buyer auction. We also show regret bounds that are scale-free and match the offline sample complexity when compared against a benchmark that requires a lower bound on the market share. We further extend our results beyond pricing to multi-buyer auctions, obtaining online learning algorithms whose convergence rates match the known sample-complexity upper bound for online single-item multi-buyer auctions. These results are obtained by generalizing the classical learning-from-experts and multi-armed bandit problems to their multi-scale versions, in which the reward of each action lies in a different range and the regret with respect to a given action scales with that action's own range rather than with the maximum range. We obtain almost optimal multi-scale regret bounds by introducing a new Online Mirror Descent (OMD) algorithm whose mirror map is a multi-scale version of the negative entropy function. We further generalize to the bandit setting by introducing a stochastic variant of this OMD algorithm.
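The multi-scale experts idea described in the abstract can be illustrated with a small full-information sketch. This is a simplified reading, not the paper's exact algorithm: assume each action i has reward range [0, c_i] and its own learning rate eta/c_i, so the OMD update with the weighted (multi-scale) negative-entropy mirror map becomes a per-action exponential step followed by a Bregman projection onto the simplex, which reduces to a one-dimensional root find. The function name `multiscale_mwu` and all parameter choices are illustrative.

```python
import numpy as np

def multiscale_mwu(c, rewards, eta=0.5):
    """Sketch of multi-scale multiplicative weights (full information).

    c[i]    -- reward range of action i (rewards assumed in [0, c[i]])
    rewards -- T x n array, rewards[t, i] = reward of action i in round t
    Returns the final weight vector over the n actions.
    """
    c = np.asarray(c, dtype=float)
    etas = eta / c                        # per-action learning rates
    n = len(c)
    w = np.ones(n) / n                    # start from the uniform distribution
    for g in rewards:
        y = w * np.exp(etas * g)          # unconstrained mirror-descent step
        # Bregman projection onto the simplex under the multi-scale negative
        # entropy: find lam >= 0 with sum_i y_i * exp(-lam * etas_i) = 1.
        lo, hi = 0.0, 1.0
        while np.sum(y * np.exp(-hi * etas)) > 1.0:
            hi *= 2.0                     # grow until the root is bracketed
        for _ in range(60):               # bisection on a decreasing map
            mid = 0.5 * (lo + hi)
            if np.sum(y * np.exp(-mid * etas)) > 1.0:
                lo = mid
            else:
                hi = mid
        w = y * np.exp(-hi * etas)
        w /= w.sum()                      # clean up residual numerical error
    return w
```

For example, with two actions of ranges 1 and 10 where the large-range action consistently pays more, the weight concentrates on the large-range action, while the regret against either action is charged at that action's own scale rather than the maximum one.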
Persistent Identifier: http://hdl.handle.net/10722/273145
ISSN: 1532-4435
2023 Impact Factor: 4.3
2023 SCImago Journal Rankings: 2.796
ISI Accession Number ID: WOS:000467879600001

 

DC Field: Value
dc.contributor.author: Bubeck, S
dc.contributor.author: Devanur, N
dc.contributor.author: Huang, Z
dc.contributor.author: Niazadeh, R
dc.date.accessioned: 2019-08-06T09:23:22Z
dc.date.available: 2019-08-06T09:23:22Z
dc.date.issued: 2019
dc.identifier.citation: Journal of Machine Learning Research, 2019, v. 20 n. 62, p. 1-37
dc.identifier.issn: 1532-4435
dc.identifier.uri: http://hdl.handle.net/10722/273145
dc.description.abstract: We consider revenue maximization in online auction and pricing problems. A seller sells an identical item in each period to a new buyer, or a new set of buyers. For the online pricing problem, both when the arriving buyer bids and when the buyer only responds to the posted price, we design algorithms whose regret bounds scale with the best fixed price in hindsight rather than with the range of the values. Under the bidding model, we further show that our algorithms achieve a revenue convergence rate matching the offline sample complexity of the single-item single-buyer auction. We also show regret bounds that are scale-free and match the offline sample complexity when compared against a benchmark that requires a lower bound on the market share. We further extend our results beyond pricing to multi-buyer auctions, obtaining online learning algorithms whose convergence rates match the known sample-complexity upper bound for online single-item multi-buyer auctions. These results are obtained by generalizing the classical learning-from-experts and multi-armed bandit problems to their multi-scale versions, in which the reward of each action lies in a different range and the regret with respect to a given action scales with that action's own range rather than with the maximum range. We obtain almost optimal multi-scale regret bounds by introducing a new Online Mirror Descent (OMD) algorithm whose mirror map is a multi-scale version of the negative entropy function. We further generalize to the bandit setting by introducing a stochastic variant of this OMD algorithm.
dc.language: eng
dc.publisher: Journal of Machine Learning Research. The Journal's web site is located at http://mitpress.mit.edu/jmlr
dc.relation.ispartof: Journal of Machine Learning Research
dc.subject: online learning
dc.subject: multi-scale learning
dc.subject: auction theory
dc.subject: bandit information
dc.subject: sample complexity
dc.title: Multi-scale Online Learning: Theory and Applications to Online Auctions and Pricing
dc.type: Article
dc.identifier.email: Huang, Z: zhiyi@cs.hku.hk
dc.identifier.authority: Huang, Z=rp01804
dc.description.nature: published_or_final_version
dc.identifier.hkuros: 299983
dc.identifier.volume: 20
dc.identifier.issue: 62
dc.identifier.spage: 1
dc.identifier.epage: 37
dc.identifier.isi: WOS:000467879600001
dc.publisher.place: United States
dc.identifier.issnl: 1532-4435
