Article: An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization

Title: An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization
Authors: Li, F; Qu, Z
Keywords: Inexact augmented Lagrangian method; Large scale optimization; Randomized first-order method; Explicit inner termination rule; Relative smoothness condition
Issue Date: 2021
Publisher: Springer. The Journal's web site is located at https://www.springer.com/journal/12532
Citation: Mathematical Programming Computation, 2021, v. 13 n. 3, p. 583-644
Abstract: We propose an inexact proximal augmented Lagrangian framework with an explicit inner-problem termination rule for composite convex optimization problems. We allow an arbitrary linearly convergent inner solver, including in particular stochastic algorithms, which makes the framework more scalable in the face of ever-increasing problem dimensions. Each subproblem is solved inexactly with an explicit and self-adaptive stopping criterion, without requiring an a priori target accuracy to be set. When the primal and dual domains are bounded, our method achieves O(1/√ϵ) and O(1/ϵ) complexity bounds, in terms of the number of inner-solver iterations, for the strongly convex and non-strongly convex cases respectively. Without the boundedness assumption, only logarithmic terms need to be added, and the above two complexity bounds increase respectively to Õ(1/√ϵ) and Õ(1/ϵ); these bounds hold for obtaining both an ϵ-optimal and an ϵ-KKT solution. Within the general framework that we propose, we also obtain Õ(1/ϵ) and Õ(1/ϵ²) complexity bounds under a relative smoothness assumption on the differentiable component of the objective function. We show, through theoretical analysis as well as numerical experiments, the computational speedup that can be achieved by using randomized inner solvers for large-scale problems.
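To illustrate the kind of scheme the abstract describes, here is a minimal sketch of an inexact proximal augmented Lagrangian loop for the linearly constrained special case min f(x) s.t. Ax = b, with gradient descent as a simple linearly convergent inner solver. The geometric shrinking of the inner tolerance is a hypothetical stand-in for the paper's self-adaptive termination rule, and all function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def inexact_prox_alm(f_grad, A, b, x0, beta=1.0, rho=1.0,
                     outer_iters=50, tol_shrink=0.5, max_inner=1000):
    """Sketch of an inexact proximal ALM for min f(x) s.t. A x = b.

    Each subproblem  min_x f(x) + <y, Ax-b> + (beta/2)||Ax-b||^2
    + (rho/2)||x - x_k||^2  is solved inexactly by gradient descent,
    stopping once the subproblem gradient norm falls below a tolerance
    that shrinks geometrically across outer iterations (a stand-in for
    the paper's self-adaptive stopping criterion).
    """
    x, y = x0.astype(float).copy(), np.zeros(A.shape[0])
    inner_tol = 1.0
    # Step size from a gradient Lipschitz bound; assumes f_grad is
    # 1-Lipschitz (true for f(x) = 0.5||x||^2 used in the demo below).
    L = 1.0 + beta * np.linalg.norm(A, 2) ** 2 + rho
    for _ in range(outer_iters):
        x_anchor = x.copy()                  # proximal center x_k
        for _ in range(max_inner):           # inner solver
            r = A @ x - b
            grad = f_grad(x) + A.T @ (y + beta * r) + rho * (x - x_anchor)
            if np.linalg.norm(grad) <= inner_tol:
                break                        # explicit inner termination
            x = x - grad / L
        y = y + beta * (A @ x - b)           # multiplier (dual) update
        inner_tol *= tol_shrink              # tighten inner accuracy
    return x, y

# Demo: min 0.5||x||^2  s.t.  x1 + x2 = 1, whose solution is (0.5, 0.5).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x_sol, y_sol = inexact_prox_alm(lambda x: x, A, b, x0=np.zeros(2))
```

Note the inexactness: the inner loop never solves the subproblem to machine precision; early outer iterations use loose tolerances, which is where the framework's complexity savings come from.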
Persistent Identifier: http://hdl.handle.net/10722/307877
ISSN: 1867-2949
2023 Impact Factor: 4.3
2023 SCImago Journal Rankings: 2.501
ISI Accession Number ID: WOS:000673691400001


DC Field: Value
dc.contributor.author: Li, F
dc.contributor.author: Qu, Z
dc.date.accessioned: 2021-11-12T13:39:12Z
dc.date.available: 2021-11-12T13:39:12Z
dc.date.issued: 2021
dc.identifier.citation: Mathematical Programming Computation, 2021, v. 13 n. 3, p. 583-644
dc.identifier.issn: 1867-2949
dc.identifier.uri: http://hdl.handle.net/10722/307877
dc.description.abstract: We propose an inexact proximal augmented Lagrangian framework with explicit inner problem termination rule for composite convex optimization problems. We consider arbitrary linearly convergent inner solver including in particular stochastic algorithms, making the resulting framework more scalable facing the ever-increasing problem dimension. Each subproblem is solved inexactly with an explicit and self-adaptive stopping criterion, without requiring to set an a priori target accuracy. When the primal and dual domain are bounded, our method achieves O(1/√ϵ) and O(1/ϵ) complexity bound in terms of number of inner solver iterations, respectively for the strongly convex and non-strongly convex case. Without the boundedness assumption, only logarithm terms need to be added and the above two complexity bounds increase respectively to Õ(1/√ϵ) and Õ(1/ϵ), which hold both for obtaining ϵ-optimal and ϵ-KKT solution. Within the general framework that we propose, we also obtain Õ(1/ϵ) and Õ(1/ϵ²) complexity bounds under relative smoothness assumption on the differentiable component of the objective function. We show through theoretical analysis as well as numerical experiments the computational speedup possibly achieved by the use of randomized inner solvers for large-scale problems.
dc.language: eng
dc.publisher: Springer. The Journal's web site is located at https://www.springer.com/journal/12532
dc.relation.ispartof: Mathematical Programming Computation
dc.subject: Inexact augmented Lagrangian method
dc.subject: Large scale optimization
dc.subject: Randomized first-order method
dc.subject: Explicit inner termination rule
dc.subject: Relative smoothness condition
dc.title: An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization
dc.type: Article
dc.identifier.email: Qu, Z: zhengqu@hku.hk
dc.identifier.authority: Qu, Z=rp02096
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1007/s12532-021-00205-x
dc.identifier.scopus: eid_2-s2.0-85110681514
dc.identifier.hkuros: 329923
dc.identifier.volume: 13
dc.identifier.issue: 3
dc.identifier.spage: 583
dc.identifier.epage: 644
dc.identifier.isi: WOS:000673691400001
dc.publisher.place: Germany
