Appears in Collections: postgraduate thesis: Convergence analysis of some algorithms for convex optimization from variational analysis perspectives
Field | Value
---|---
Title | Convergence analysis of some algorithms for convex optimization from variational analysis perspectives
Authors | Zeng, Shangzhi (曾尚志)
Issue Date | 2021
Publisher | The University of Hong Kong (Pokfulam, Hong Kong)
Citation | Zeng, S. [曾尚志]. (2021). Convergence analysis of some algorithms for convex optimization from variational analysis perspectives. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
Abstract | In this thesis, we focus on some canonical nonsmooth convex optimization problems, and study the local convergence rates for some benchmark numerical algorithms from variational analysis perspectives. We first consider the minimization of the sum of two convex functions, one of which is nonsmooth, and discuss how to derive the local linear convergence rates for the proximal gradient method (PGM), the proximal alternating linearized minimization (PALM) algorithm, and the randomized block coordinate proximal gradient method (R-BCPGM). A new analytic framework based on the error bound/calmness/metric subregularity condition is proposed for convergence rate analysis. We then consider a separable convex minimization problem with linear constraints, and apply the new analytic framework to study the alternating direction method of multipliers (ADMM). The local linear convergence rates are derived for the ADMM and its variants. We also propose a new globally convergent proximal Newton-type method for a broad class of nonsmooth composite convex optimization problems without requiring strong convexity assumptions, and establish its local superlinear and quadratic convergence rates under the metric subregularity condition. Our analysis is based on sophisticated applications of advanced variational analysis notions and techniques, such as the calmness property, the metric subregularity property, the calm intersection theorem, and perturbation analysis techniques.
Degree | Doctor of Philosophy
Subject | Convex functions; Mathematical optimization; Calculus of variations
Dept/Program | Mathematics
Persistent Identifier | http://hdl.handle.net/10722/325722
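The abstract's first problem class, minimizing the sum of a smooth convex function and a nonsmooth convex function, is the setting of the proximal gradient method (PGM). As a rough illustration only (this sketch is not taken from the thesis), the iteration alternates a gradient step on the smooth part with a proximal step on the nonsmooth part; the example below uses the LASSO instance, where the proximal operator is soft-thresholding:

```python
import numpy as np

# Illustrative sketch of the proximal gradient method (PGM) for
# min_x f(x) + g(x), with f smooth and g nonsmooth but prox-friendly.
# Here f(x) = 0.5*||Ax - b||^2 and g(x) = lam*||x||_1 (a LASSO instance,
# chosen for illustration), so prox of g is the soft-thresholding map.

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, iters=500):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of grad f
    for _ in range(iters):
        grad = A.T @ (A @ x - b)             # gradient step on the smooth part f
        x = soft_threshold(x - step * grad, step * lam)  # prox step on g
    return x
```

Under the error bound/calmness/metric subregularity conditions studied in the thesis, iterations of this type converge locally at a linear rate even without strong convexity.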
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zeng, Shangzhi | - |
dc.contributor.author | 曾尚志 | - |
dc.date.accessioned | 2023-03-02T16:32:18Z | - |
dc.date.available | 2023-03-02T16:32:18Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Zeng, S. [曾尚志]. (2021). Convergence analysis of some algorithms for convex optimization from variational analysis perspectives. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. | - |
dc.identifier.uri | http://hdl.handle.net/10722/325722 | - |
dc.description.abstract | In this thesis, we focus on some canonical nonsmooth convex optimization problems, and study the local convergence rates for some benchmark numerical algorithms from variational analysis perspectives. We first consider the minimization of the sum of two convex functions, one of which is nonsmooth, and discuss how to derive the local linear convergence rates for the proximal gradient method (PGM), the proximal alternating linearized minimization (PALM) algorithm, and the randomized block coordinate proximal gradient method (R-BCPGM). A new analytic framework based on the error bound/calmness/metric subregularity condition is proposed for convergence rate analysis. We then consider a separable convex minimization problem with linear constraints, and apply the new analytic framework to study the alternating direction method of multipliers (ADMM). The local linear convergence rates are derived for the ADMM and its variants. We also propose a new globally convergent proximal Newton-type method for a broad class of nonsmooth composite convex optimization problems without requiring strong convexity assumptions, and establish its local superlinear and quadratic convergence rates under the metric subregularity condition. Our analysis is based on sophisticated applications of advanced variational analysis notions and techniques, such as the calmness property, the metric subregularity property, the calm intersection theorem, and perturbation analysis techniques. | - |
dc.language | eng | - |
dc.publisher | The University of Hong Kong (Pokfulam, Hong Kong) | - |
dc.relation.ispartof | HKU Theses Online (HKUTO) | - |
dc.rights | The author retains all proprietary rights (such as patent rights) and the right to use in future works. | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject.lcsh | Convex functions | - |
dc.subject.lcsh | Mathematical optimization | - |
dc.subject.lcsh | Calculus of variations | - |
dc.title | Convergence analysis of some algorithms for convex optimization from variational analysis perspectives | - |
dc.type | PG_Thesis | - |
dc.description.thesisname | Doctor of Philosophy | - |
dc.description.thesislevel | Doctoral | - |
dc.description.thesisdiscipline | Mathematics | - |
dc.description.nature | published_or_final_version | - |
dc.date.hkucongregation | 2021 | - |
dc.identifier.mmsid | 991044649905503414 | - |