
Postgraduate thesis: Self-adaptive proximal point methods and applications to large-scale optimization : algorithm design and complexity analysis

Title: Self-adaptive proximal point methods and applications to large-scale optimization : algorithm design and complexity analysis
Author(s): Lu, Meng
Advisor(s): Qu, Z; Zang, W
Issue Date: 2021
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Citation: Lu, M. (2021). Self-adaptive proximal point methods and applications to large-scale optimization : algorithm design and complexity analysis. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
Abstract: In this thesis, efforts are devoted to designing efficient algorithms for large-scale optimization. The proximal point algorithm (PPA) and its inexact extensions are known to converge asymptotically at a linear rate if the metric subregularity condition holds for the maximal monotone operator $T$. Under a slightly stronger condition, called bounded metric subregularity, the convergence becomes globally linear, and this condition holds for many relevant problems arising in applications. However, such global linear convergence requires knowledge of the bounded metric subregularity parameter, which is an unrealistic assumption. We propose an adaptive generalized proximal point algorithm (AGPPA), which adaptively updates the proximal regularization parameters based on a sequence of implementable checking criteria. We show that AGPPA achieves linear convergence without any knowledge of the bounded metric subregularity parameter, and that the rate differs from the optimal one only by a logarithmic term. We apply AGPPA to convex programming. Applying AGPPA with different termination rules for the inner problems leads to different adaptive proximal augmented Lagrangian methods (APALM). We consider two frequently used termination rules: one controls the inexactness of the subgradient value, while the other controls the inexactness of the function value. We analyze the iteration complexity bounds of the corresponding APALMs under the bounded metric subregularity condition. Our framework and complexity results apply to any linearly convergent inner solver and allow for a hybrid with any locally fast convergent method. We illustrate the performance of AGPPA by applying it to solve large-scale linear programming (LP) problems. The resulting complexity bound has a weaker dependence on the Hoffman constant and scales better with the dimension than linearized ADMM. For numerical acceleration, we design a globally convergent inexact projected semismooth Newton method for the inner problems; the method has local superlinear convergence under the strict complementarity assumption. Using a hybrid inner solver, our algorithm demonstrates improved performance in obtaining solutions of medium accuracy on large-scale LP data sets. Furthermore, we numerically study the error parameters in AGPPA for the LP problem, which inspires us to relax the inexactness criteria in AGPPA and leads to a more implementable framework, namely relaxed AGPPA (RAGPPA). We also apply AGPPA to solve the large-scale least absolute shrinkage and selection operator (LASSO) problem and, as with the LP problem, we show the advantages of AGPPA from both a theoretical and a numerical perspective.
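To make the record easier to skim, here is a compact statement of the two objects the abstract centers on; the formulas below are standard reconstructions of the usual definitions, not text taken from the thesis. The exact PPA for finding a zero of the maximal monotone operator $T$ iterates

$$x^{k+1} = (I + c_k T)^{-1}(x^k),$$

where $c_k > 0$ is the proximal regularization parameter that AGPPA adapts. $T$ is metrically subregular at a solution $\bar{x} \in T^{-1}(0)$ if there exist $\kappa, \varepsilon > 0$ such that

$$\mathrm{dist}\big(x, T^{-1}(0)\big) \le \kappa \, \mathrm{dist}\big(0, T(x)\big) \quad \text{whenever } \|x - \bar{x}\| \le \varepsilon;$$

the bounded variant requires such a $\kappa$ on every bounded set, and this $\kappa$ is exactly the parameter AGPPA avoids having to know.

The following is a minimal Python sketch of the pattern the abstract describes, applied to the LASSO problem: an outer proximal point loop whose regularization parameter grows whenever an implementable check fails. The function names, the ISTA inner solver, and the residual-contraction check are simplified stand-ins assumed for illustration; AGPPA's actual checking criteria and termination rules are those specified in the thesis.

```python
# Sketch of an adaptive inexact proximal point loop for LASSO:
#   min_x 0.5*||A x - b||^2 + lam*||x||_1
# The checking criterion and growth rule below are illustrative stand-ins,
# not AGPPA's actual criteria.
import numpy as np

def soft_threshold(v, t):
    """Prox of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inner_ista(A, b, lam, xk, c, n_iter=200):
    """Approximately minimize the proximal subproblem
    0.5*||A x - b||^2 + lam*||x||_1 + ||x - xk||^2/(2c) by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2 + 1.0 / c      # Lipschitz constant of the smooth part
    x = xk.copy()                                # warm start at the outer iterate
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) + (x - xk) / c  # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

def adaptive_ppa(A, b, lam, n_outer=30, c0=1.0, target=0.5, growth=4.0):
    x = np.zeros(A.shape[1])
    c, prev_res = c0, np.inf
    for _ in range(n_outer):
        x_new = inner_ista(A, b, lam, x, c)
        res = np.linalg.norm(x_new - x)          # proximal residual, ~ c * dist(0, T(x_new))
        # Stand-in check: if the residual did not contract as a linear rate
        # would predict, c is too small relative to the unknown subregularity
        # parameter, so grow it.
        if res > target * prev_res:
            c *= growth
        x, prev_res = x_new, res
    return x

# Tiny usage example on synthetic data (hypothetical sizes).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = adaptive_ppa(A, b, lam=0.1)
print("nonzeros recovered:", int(np.sum(np.abs(x_hat) > 1e-3)))
```

The point the sketch mirrors from the abstract is that the loop never consults the subregularity constant $\kappa$: it only reacts to observable residuals and enlarges $c_k$ when the observed contraction is too slow.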
Degree: Doctor of Philosophy
Subject: Convex programming
Numerical analysis
Mathematical optimization
Dept/Program: Mathematics
Persistent Identifier: http://hdl.handle.net/10722/297501


DC Field: Value
dc.contributor.advisor: Qu, Z
dc.contributor.advisor: Zang, W
dc.contributor.author: Lu, Meng
dc.date.accessioned: 2021-03-21T11:37:58Z
dc.date.available: 2021-03-21T11:37:58Z
dc.date.issued: 2021
dc.identifier.citation: Lu, M. (2021). Self-adaptive proximal point methods and applications to large-scale optimization : algorithm design and complexity analysis. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
dc.identifier.uri: http://hdl.handle.net/10722/297501
dc.description.abstract: (identical to the Abstract above)
dc.language: eng
dc.publisher: The University of Hong Kong (Pokfulam, Hong Kong)
dc.relation.ispartof: HKU Theses Online (HKUTO)
dc.rights: The author retains all proprietary rights (such as patent rights) and the right to use in future works.
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject.lcsh: Convex programming
dc.subject.lcsh: Numerical analysis
dc.subject.lcsh: Mathematical optimization
dc.title: Self-adaptive proximal point methods and applications to large-scale optimization : algorithm design and complexity analysis
dc.type: PG_Thesis
dc.description.thesisname: Doctor of Philosophy
dc.description.thesislevel: Doctoral
dc.description.thesisdiscipline: Mathematics
dc.description.nature: published_or_final_version
dc.date.hkucongregation: 2021
dc.identifier.mmsid: 991044351387403414
