Article: An Asymptotic Analysis of Random Partition Based Minibatch Momentum Methods for Linear Regression Models
Title | An Asymptotic Analysis of Random Partition Based Minibatch Momentum Methods for Linear Regression Models |
---|---|
Authors | Gao, Yuan; Zhu, Xuening; Qi, Haobo; Li, Guodong; Zhang, Riquan; Wang, Hansheng |
Keywords | Fixed minibatch; Gradient descent; Momentum method; Numerical convergence rate; Shuffled minibatch; Statistical efficiency |
Issue Date | 1-Jan-2023 |
Publisher | Taylor and Francis Group |
Citation | Journal of Computational and Graphical Statistics, 2023, v. 32, n. 3, p. 1083-1096 |
Abstract | Momentum methods have been shown to accelerate the convergence of the standard gradient descent algorithm in practice and theory. In particular, the random partition based minibatch gradient descent methods with momentum (MGDM) are widely used to solve large-scale optimization problems with massive datasets. Despite the great popularity of the MGDM methods in practice, their theoretical properties are still underexplored. To this end, we investigate the theoretical properties of MGDM methods based on the linear regression models. We first study the numerical convergence properties of the MGDM algorithm and derive the conditions for faster numerical convergence rate. In addition, we explore the relationship between the statistical properties of the resulting MGDM estimator and the tuning parameters. Based on these theoretical findings, we give the conditions for the resulting estimator to achieve the optimal statistical efficiency. Finally, extensive numerical experiments are conducted to verify our theoretical results. Supplementary materials for this article are available online. |
Persistent Identifier | http://hdl.handle.net/10722/344884 |
ISSN | 1061-8600 (2023 Impact Factor: 1.4; 2023 SCImago Journal Rankings: 1.530) |
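The abstract describes random partition based minibatch gradient descent with momentum (MGDM) applied to linear regression: in each epoch the samples are randomly partitioned into minibatches, and each minibatch supplies one momentum-accelerated gradient step. The following is a minimal sketch of that general scheme, not the authors' exact formulation; the function name, tuning-parameter defaults, and simulated data are illustrative assumptions.

```python
import numpy as np

def mgdm_linear_regression(X, Y, lr=0.1, momentum=0.9, n_batches=5,
                           n_epochs=100, seed=0):
    """Minibatch gradient descent with momentum for least squares.

    Each epoch, the sample indices are shuffled and partitioned into
    `n_batches` minibatches (the "shuffled minibatch" scheme named in
    the keywords); each minibatch contributes one update.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)   # regression coefficient estimate
    v = np.zeros(p)      # momentum (velocity) term
    for _ in range(n_epochs):
        idx = rng.permutation(n)                       # random partition
        for batch in np.array_split(idx, n_batches):
            Xb, Yb = X[batch], Y[batch]
            # least-squares gradient on the current minibatch
            grad = Xb.T @ (Xb @ beta - Yb) / len(batch)
            v = momentum * v + grad                    # momentum update
            beta = beta - lr * v
    return beta

# Usage: recover the coefficients of a simulated linear model
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
true_beta = np.array([1.0, -2.0, 0.5])
Y = X @ true_beta + 0.1 * rng.normal(size=1000)
beta_hat = mgdm_linear_regression(X, Y)
```

The learning rate and momentum weight are the tuning parameters whose interplay the paper analyzes; the fixed-minibatch variant mentioned in the keywords would partition the data once up front instead of reshuffling every epoch.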
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Gao, Yuan | - |
dc.contributor.author | Zhu, Xuening | - |
dc.contributor.author | Qi, Haobo | - |
dc.contributor.author | Li, Guodong | - |
dc.contributor.author | Zhang, Riquan | - |
dc.contributor.author | Wang, Hansheng | - |
dc.date.accessioned | 2024-08-12T04:08:07Z | - |
dc.date.available | 2024-08-12T04:08:07Z | - |
dc.date.issued | 2023-01-01 | - |
dc.identifier.citation | Journal of Computational and Graphical Statistics, 2023, v. 32, n. 3, p. 1083-1096 | - |
dc.identifier.issn | 1061-8600 | - |
dc.identifier.uri | http://hdl.handle.net/10722/344884 | - |
dc.description.abstract | Momentum methods have been shown to accelerate the convergence of the standard gradient descent algorithm in practice and theory. In particular, the random partition based minibatch gradient descent methods with momentum (MGDM) are widely used to solve large-scale optimization problems with massive datasets. Despite the great popularity of the MGDM methods in practice, their theoretical properties are still underexplored. To this end, we investigate the theoretical properties of MGDM methods based on the linear regression models. We first study the numerical convergence properties of the MGDM algorithm and derive the conditions for faster numerical convergence rate. In addition, we explore the relationship between the statistical properties of the resulting MGDM estimator and the tuning parameters. Based on these theoretical findings, we give the conditions for the resulting estimator to achieve the optimal statistical efficiency. Finally, extensive numerical experiments are conducted to verify our theoretical results. Supplementary materials for this article are available online. | - |
dc.language | eng | - |
dc.publisher | Taylor and Francis Group | - |
dc.relation.ispartof | Journal of Computational and Graphical Statistics | - |
dc.subject | Fixed minibatch | - |
dc.subject | Gradient descent | - |
dc.subject | Momentum method | - |
dc.subject | Numerical convergence rate | - |
dc.subject | Shuffled minibatch | - |
dc.subject | Statistical efficiency | - |
dc.title | An Asymptotic Analysis of Random Partition Based Minibatch Momentum Methods for Linear Regression Models | - |
dc.type | Article | - |
dc.identifier.doi | 10.1080/10618600.2022.2143786 | - |
dc.identifier.scopus | eid_2-s2.0-85144185039 | - |
dc.identifier.volume | 32 | - |
dc.identifier.issue | 3 | - |
dc.identifier.spage | 1083 | - |
dc.identifier.epage | 1096 | - |
dc.identifier.eissn | 1537-2715 | - |
dc.identifier.issnl | 1061-8600 | - |