Appears in Collections: postgraduate thesis: Self-adaptive proximal point methods and applications to large-scale optimization : algorithm design and complexity analysis
Field | Value |
---|---|
Title | Self-adaptive proximal point methods and applications to large-scale optimization : algorithm design and complexity analysis |
Authors | Lu, Meng |
Advisors | Qu, Z; Zang, W |
Issue Date | 2021 |
Publisher | The University of Hong Kong (Pokfulam, Hong Kong) |
Citation | Lu, M. (2021). Self-adaptive proximal point methods and applications to large-scale optimization : algorithm design and complexity analysis. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. |
Abstract | In this thesis, efforts are devoted to designing efficient algorithms for large-scale optimization.
The proximal point algorithm (PPA) and its inexact extensions are known to converge asymptotically at a linear rate if the metric subregularity condition holds for the maximal monotone operator $T$. Under a slightly stronger condition, termed bounded metric subregularity, linear convergence holds globally, and this condition covers many relevant problems in applications. However, such global linear convergence requires knowledge of the bounded metric subregularity parameter, which is an unrealistic assumption. We propose an adaptive generalized proximal point algorithm (AGPPA), which adaptively updates the proximal regularization parameters based on a sequence of implementable checking criteria. We show that AGPPA achieves linear convergence without any knowledge of the bounded metric subregularity parameter, and that the rate differs from the optimal one only by a logarithmic term.
We apply AGPPA to convex programming. Applying AGPPA with different termination rules for the inner problems leads to different adaptive proximal augmented Lagrangian methods (APALMs). We consider two frequently used termination rules: one controls the inexactness of the subgradient, while the other controls the inexactness of the function value. We analyze the iteration complexity bounds of the corresponding APALMs under the bounded metric subregularity condition. Our framework and complexity results apply to any linearly convergent inner solver and allow for hybridization with any locally fast convergent method.
We illustrate the performance of AGPPA by applying it to solve large-scale linear programming (LP) problems. The resulting complexity bound has a weaker dependence on the Hoffman constant and scales better with the dimension than that of linearized ADMM. For numerical acceleration, we design a globally convergent inexact projected semismooth Newton method for the inner problems; the method enjoys local superlinear convergence under the strict complementarity assumption. Using a hybrid inner solver, our algorithm demonstrates improved performance in obtaining solutions of medium accuracy on large-scale LP data sets. Furthermore, we numerically study the error parameters in AGPPA for the LP problem, which inspires us to relax the inexactness criteria in AGPPA and leads to a more implementable framework, namely relaxed AGPPA (RAGPPA). We also apply AGPPA to solve the large-scale least absolute shrinkage and selection operator (LASSO) problem and, as with the LP problem, we show the advantages of AGPPA from both a theoretical and a numerical perspective. |
Degree | Doctor of Philosophy |
Subject | Convex programming; Numerical analysis; Mathematical optimization |
Dept/Program | Mathematics |
Persistent Identifier | http://hdl.handle.net/10722/297501 |
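The adaptive proximal point scheme summarized in the abstract can be illustrated on a one-dimensional toy problem. The sketch below is not the thesis's AGPPA (whose checking criteria and parameter-update rules are more elaborate); it only shows the basic pattern of a proximal point iteration whose regularization parameter is enlarged whenever a simple contraction check fails. The test function f(x) = |x|, the check ratio `rho`, and the growth factor are illustrative choices, not taken from the thesis.

```python
def soft_threshold(x, t):
    # Proximal map of t*|.| in one dimension: shrink x toward 0 by t.
    return max(abs(x) - t, 0.0) * (1.0 if x > 0 else -1.0)

def adaptive_ppa(x0, c0=1.0, rho=0.5, growth=2.0, tol=1e-10, max_iter=100):
    # Proximal point iteration on f(x) = |x| (minimizer x* = 0):
    #   x_{k+1} = prox_{c_k f}(x_k),
    # where c_k is enlarged whenever the iterate fails to contract by
    # the factor rho -- a toy stand-in for an implementable checking
    # criterion that needs no knowledge of the regularity parameter.
    x, c = x0, c0
    for k in range(max_iter):
        x_new = soft_threshold(x, c)   # prox step with current parameter c
        if abs(x_new) > rho * abs(x):  # check failed: contraction too weak
            c *= growth                # enlarge the proximal parameter
        x = x_new
        if abs(x) < tol:               # solved to tolerance
            break
    return x, c, k
```

Starting from x0 = 10 with c0 = 1, the parameter doubles until the prox step contracts fast enough, after which the iterate reaches the minimizer exactly.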
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Qu, Z | - |
dc.contributor.advisor | Zang, W | - |
dc.contributor.author | Lu, Meng | - |
dc.date.accessioned | 2021-03-21T11:37:58Z | - |
dc.date.available | 2021-03-21T11:37:58Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Lu, M. (2021). Self-adaptive proximal point methods and applications to large-scale optimization : algorithm design and complexity analysis. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. | - |
dc.identifier.uri | http://hdl.handle.net/10722/297501 | - |
dc.description.abstract | In this thesis, efforts are devoted to designing efficient algorithms for large-scale optimization. The proximal point algorithm (PPA) and its inexact extensions are known to converge asymptotically at a linear rate if the metric subregularity condition holds for the maximal monotone operator $T$. Under a slightly stronger condition, termed bounded metric subregularity, linear convergence holds globally, and this condition covers many relevant problems in applications. However, such global linear convergence requires knowledge of the bounded metric subregularity parameter, which is an unrealistic assumption. We propose an adaptive generalized proximal point algorithm (AGPPA), which adaptively updates the proximal regularization parameters based on a sequence of implementable checking criteria. We show that AGPPA achieves linear convergence without any knowledge of the bounded metric subregularity parameter, and that the rate differs from the optimal one only by a logarithmic term. We apply AGPPA to convex programming. Applying AGPPA with different termination rules for the inner problems leads to different adaptive proximal augmented Lagrangian methods (APALMs). We consider two frequently used termination rules: one controls the inexactness of the subgradient, while the other controls the inexactness of the function value. We analyze the iteration complexity bounds of the corresponding APALMs under the bounded metric subregularity condition. Our framework and complexity results apply to any linearly convergent inner solver and allow for hybridization with any locally fast convergent method. We illustrate the performance of AGPPA by applying it to solve large-scale linear programming (LP) problems. The resulting complexity bound has a weaker dependence on the Hoffman constant and scales better with the dimension than that of linearized ADMM.
For numerical acceleration, we design a globally convergent inexact projected semismooth Newton method for the inner problems; the method enjoys local superlinear convergence under the strict complementarity assumption. Using a hybrid inner solver, our algorithm demonstrates improved performance in obtaining solutions of medium accuracy on large-scale LP data sets. Furthermore, we numerically study the error parameters in AGPPA for the LP problem, which inspires us to relax the inexactness criteria in AGPPA and leads to a more implementable framework, namely relaxed AGPPA (RAGPPA). We also apply AGPPA to solve the large-scale least absolute shrinkage and selection operator (LASSO) problem and, as with the LP problem, we show the advantages of AGPPA from both a theoretical and a numerical perspective. | - |
dc.language | eng | - |
dc.publisher | The University of Hong Kong (Pokfulam, Hong Kong) | - |
dc.relation.ispartof | HKU Theses Online (HKUTO) | - |
dc.rights | The author retains all proprietary rights (such as patent rights) and the right to use in future works. | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject.lcsh | Convex programming | - |
dc.subject.lcsh | Numerical analysis | - |
dc.subject.lcsh | Mathematical optimization | - |
dc.title | Self-adaptive proximal point methods and applications to large-scale optimization : algorithm design and complexity analysis | - |
dc.type | PG_Thesis | - |
dc.description.thesisname | Doctor of Philosophy | - |
dc.description.thesislevel | Doctoral | - |
dc.description.thesisdiscipline | Mathematics | - |
dc.description.nature | published_or_final_version | - |
dc.date.hkucongregation | 2021 | - |
dc.identifier.mmsid | 991044351387403414 | - |
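The LASSO problem named in the abstract, min 0.5||Ax - b||² + λ||x||₁, also has a soft-thresholding operator as its key proximal map. The sketch below is a minimal proximal-gradient (ISTA) baseline for context, not the AGPPA-based solver developed in the thesis; the function name, the 1/L step-size choice, and the fixed iteration count are illustrative assumptions.

```python
import numpy as np

def ista_lasso(A, b, lam, step=None, iters=500):
    # Proximal-gradient (ISTA) baseline for
    #   min_x 0.5*||A x - b||^2 + lam*||x||_1.
    # This is a standard reference method, not the thesis's algorithm.
    n = A.shape[1]
    x = np.zeros(n)
    if step is None:
        # Step 1/L, where L = ||A||_2^2 is the Lipschitz constant
        # of the smooth part's gradient.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        g = A.T @ (A @ x - b)          # gradient of 0.5*||Ax - b||^2
        z = x - step * g               # forward (gradient) step
        # Backward step: componentwise soft-thresholding, the proximal
        # map of step*lam*||.||_1.
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```

With A = I the method reduces to a single soft-thresholding of b, which makes the shrinkage effect of the ℓ₁ penalty easy to verify by hand: entries of b smaller than λ in magnitude are set exactly to zero.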