Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1007/s10915-017-0477-9
- Scopus: eid_2-s2.0-85023203205
- WOS: WOS:000424676600011
Article: On the Optimal Linear Convergence Rate of a Generalized Proximal Point Algorithm
| Field | Value |
|---|---|
| Title | On the Optimal Linear Convergence Rate of a Generalized Proximal Point Algorithm |
| Authors | Tao, Min; Yuan, Xiaoming |
| Keywords | Linear convergence rate; Convex programming; Augmented Lagrangian method; Proximal point algorithm; Alternating direction method of multipliers |
| Issue Date | 2017 |
| Citation | Journal of Scientific Computing, 2017, p. 1-25 |
| Abstract | © 2017 Springer Science+Business Media, LLC The proximal point algorithm (PPA) has been well studied in the literature. In particular, its linear convergence rate was studied by Rockafellar in 1976 under a certain condition. We consider a generalized PPA in the generic setting of finding a zero point of a maximal monotone operator, and show that the condition proposed by Rockafellar also suffices to ensure the linear convergence rate of this generalized PPA. Indeed, we show that these linear convergence rates are optimal. Both the exact and inexact versions of this generalized PPA are discussed. The motivation for considering this generalized PPA is that it includes as special cases the relaxed versions of some splitting methods that originate from the PPA. Thus, linear convergence results for this generalized PPA can be used to better understand the convergence of some widely used algorithms in the literature. We focus on the particular convex minimization context and specify Rockafellar's condition to see how to ensure the linear convergence rate for some efficient numerical schemes, including the classical augmented Lagrangian method proposed by Hestenes and Powell in 1969 and its relaxed version, the original alternating direction method of multipliers (ADMM) by Glowinski and Marrocco in 1975 and its relaxed version (i.e., the generalized ADMM by Eckstein and Bertsekas in 1992). Some refined conditions weaker than existing ones are proposed in these particular contexts. |
| Persistent Identifier | http://hdl.handle.net/10722/251225 |
| ISSN | 0885-7474 |
| 2023 Impact Factor | 2.8 |
| 2023 SCImago Journal Rankings | 1.248 |
| ISI Accession Number ID | WOS:000424676600011 |
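The generalized PPA described in the abstract relaxes the classical proximal step with a factor ρ ∈ (0, 2); ρ = 1 recovers the original PPA. As a minimal illustrative sketch (not the paper's algorithm or analysis), the iteration applied to a simple strongly convex quadratic, for which the proximal map has a closed form, looks like this — all names and the test problem here are assumptions for illustration:

```python
# Relaxed (generalized) proximal point iteration on f(x) = 0.5 * (x - 3)^2,
# whose proximal map argmin_u f(u) + (1/(2*lam)) * (u - x)^2 is closed-form.

def prox(x, lam):
    # Setting the derivative (u - 3) + (u - x)/lam to zero gives:
    return (x + 3.0 * lam) / (1.0 + lam)

def relaxed_ppa(x0, lam=1.0, rho=1.5, iters=50):
    # rho in (0, 2) is the relaxation factor; rho = 1 is classical PPA.
    x = x0
    for _ in range(iters):
        x = (1.0 - rho) * x + rho * prox(x, lam)
    return x

print(relaxed_ppa(10.0))  # converges linearly to the minimizer x* = 3
```

With lam = 1 and rho = 1.5 the update reduces to x ← 0.25·x + 2.25, so the error contracts by a factor of 0.25 per step, a simple instance of the linear convergence rate the paper studies in full generality for maximal monotone operators.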
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Tao, Min | - |
dc.contributor.author | Yuan, Xiaoming | - |
dc.date.accessioned | 2018-02-01T01:54:57Z | - |
dc.date.available | 2018-02-01T01:54:57Z | - |
dc.date.issued | 2017 | - |
dc.identifier.citation | Journal of Scientific Computing, 2017, p. 1-25 | - |
dc.identifier.issn | 0885-7474 | - |
dc.identifier.uri | http://hdl.handle.net/10722/251225 | - |
dc.description.abstract | © 2017 Springer Science+Business Media, LLC The proximal point algorithm (PPA) has been well studied in the literature. In particular, its linear convergence rate was studied by Rockafellar in 1976 under a certain condition. We consider a generalized PPA in the generic setting of finding a zero point of a maximal monotone operator, and show that the condition proposed by Rockafellar also suffices to ensure the linear convergence rate of this generalized PPA. Indeed, we show that these linear convergence rates are optimal. Both the exact and inexact versions of this generalized PPA are discussed. The motivation for considering this generalized PPA is that it includes as special cases the relaxed versions of some splitting methods that originate from the PPA. Thus, linear convergence results for this generalized PPA can be used to better understand the convergence of some widely used algorithms in the literature. We focus on the particular convex minimization context and specify Rockafellar's condition to see how to ensure the linear convergence rate for some efficient numerical schemes, including the classical augmented Lagrangian method proposed by Hestenes and Powell in 1969 and its relaxed version, the original alternating direction method of multipliers (ADMM) by Glowinski and Marrocco in 1975 and its relaxed version (i.e., the generalized ADMM by Eckstein and Bertsekas in 1992). Some refined conditions weaker than existing ones are proposed in these particular contexts. | -
dc.language | eng | - |
dc.relation.ispartof | Journal of Scientific Computing | - |
dc.subject | Linear convergence rate | - |
dc.subject | Convex programming | - |
dc.subject | Augmented Lagrangian method | - |
dc.subject | Proximal point algorithm | - |
dc.subject | Alternating direction method of multipliers | - |
dc.title | On the Optimal Linear Convergence Rate of a Generalized Proximal Point Algorithm | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1007/s10915-017-0477-9 | - |
dc.identifier.scopus | eid_2-s2.0-85023203205 | - |
dc.identifier.spage | 1 | - |
dc.identifier.epage | 25 | - |
dc.identifier.isi | WOS:000424676600011 | - |
dc.identifier.issnl | 0885-7474 | - |