Appears in Collections:
- Conference Paper: Stability and Generalization of Stochastic Optimization with Nonconvex and Nonsmooth Problems
Title | Stability and Generalization of Stochastic Optimization with Nonconvex and Nonsmooth Problems |
---|---|
Authors | Lei, Yunwen |
Issue Date | 12-Jul-2023 |
Abstract | Stochastic optimization has found wide application in minimizing objective functions in machine learning, motivating many theoretical studies of its practical success. Most existing studies focus on the convergence of optimization errors, while the generalization analysis of stochastic optimization lags far behind. This is especially the case for the nonconvex and nonsmooth problems often encountered in practice. In this paper, we initiate a systematic stability and generalization analysis of stochastic optimization on nonconvex and nonsmooth problems. We introduce novel algorithmic stability measures and establish their quantitative connection to the gap between population gradients and empirical gradients, which we then extend to study the gap between the Moreau envelope of the empirical risk and that of the population risk. To our knowledge, such quantitative connections between stability and generalization in terms of either gradients or Moreau envelopes have not been studied in the literature. We introduce a class of sampling-determined algorithms, for which we develop bounds for three stability measures. Finally, we apply these results to derive error bounds for stochastic gradient descent and its adaptive variant, where we show how to achieve implicit regularization by tuning the step sizes and the number of iterations. |
Persistent Identifier | http://hdl.handle.net/10722/333737 |
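As background for the abstract above: the Moreau envelope it refers to is the standard infimal-convolution smoothing of a function \(f\), here written with a generic smoothing parameter \(\lambda > 0\) (the paper may use different notation):

```latex
% Moreau envelope of f with smoothing parameter \lambda > 0
f_\lambda(x) \;=\; \inf_{y}\,\Big\{\, f(y) + \tfrac{1}{2\lambda}\,\lVert y - x \rVert_2^2 \,\Big\}
```

For weakly convex \(f\) and sufficiently small \(\lambda\), the envelope \(f_\lambda\) is differentiable even when \(f\) is nonsmooth, and a small gradient norm \(\lVert \nabla f_\lambda(x) \rVert\) certifies that \(x\) is near an approximate stationary point of \(f\). This is why the gap between the Moreau envelopes of the empirical and population risks is a natural generalization measure for nonsmooth problems.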
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lei, Yunwen | - |
dc.date.accessioned | 2023-10-06T08:38:40Z | - |
dc.date.available | 2023-10-06T08:38:40Z | - |
dc.date.issued | 2023-07-12 | - |
dc.identifier.uri | http://hdl.handle.net/10722/333737 | - |
dc.description.abstract | Stochastic optimization has found wide application in minimizing objective functions in machine learning, motivating many theoretical studies of its practical success. Most existing studies focus on the convergence of optimization errors, while the generalization analysis of stochastic optimization lags far behind. This is especially the case for the nonconvex and nonsmooth problems often encountered in practice. In this paper, we initiate a systematic stability and generalization analysis of stochastic optimization on nonconvex and nonsmooth problems. We introduce novel algorithmic stability measures and establish their quantitative connection to the gap between population gradients and empirical gradients, which we then extend to study the gap between the Moreau envelope of the empirical risk and that of the population risk. To our knowledge, such quantitative connections between stability and generalization in terms of either gradients or Moreau envelopes have not been studied in the literature. We introduce a class of sampling-determined algorithms, for which we develop bounds for three stability measures. Finally, we apply these results to derive error bounds for stochastic gradient descent and its adaptive variant, where we show how to achieve implicit regularization by tuning the step sizes and the number of iterations. | - |
dc.language | eng | - |
dc.relation.ispartof | 36th Annual Conference on Learning Theory (COLT 2023) (12/07/2023-15/07/2023, Bangalore, India) | - |
dc.title | Stability and Generalization of Stochastic Optimization with Nonconvex and Nonsmooth Problems | - |
dc.type | Conference_Paper | - |
dc.description.nature | published_or_final_version | - |
dc.identifier.doi | 10.48550/arXiv.2206.07082 | - |
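The abstract discusses error bounds for stochastic gradient descent on nonsmooth objectives, where the step sizes and iteration count act as a form of implicit regularization. The following is a minimal illustrative sketch (not the paper's algorithm or analysis): subgradient SGD on the nonsmooth empirical risk \(f(w) = \frac{1}{n}\sum_i |x_i w - y_i|\), with decaying step sizes \(\eta_t = \eta_0/\sqrt{t+1}\). All names (`empirical_risk`, `sgd`, the toy dataset) are hypothetical.

```python
import math
import random

def empirical_risk(w, data):
    # Nonsmooth empirical risk: mean absolute loss over the sample.
    return sum(abs(x * w - y) for x, y in data) / len(data)

def subgradient(w, x, y):
    # A subgradient of |x*w - y| with respect to w (0 at the kink).
    r = x * w - y
    return x if r > 0 else (-x if r < 0 else 0.0)

def sgd(data, w0=0.0, eta0=0.5, n_iters=200, seed=0):
    # Vanilla SGD with decaying step sizes eta_t = eta0 / sqrt(t+1).
    # In the stability viewpoint, eta0 and n_iters jointly control how
    # far the iterate can drift, acting as an implicit regularizer.
    rng = random.Random(seed)
    w = w0
    for t in range(n_iters):
        x, y = rng.choice(data)  # sample one training example
        w -= eta0 / math.sqrt(t + 1) * subgradient(w, x, y)
    return w

# Toy dataset with y = 3x, so the empirical risk is minimized at w = 3.
data = [(x, 3.0 * x) for x in (-2.0, -1.0, 1.0, 2.0)]
w_final = sgd(data)
```

With the decaying schedule, the iterate settles near the minimizer and the late-iteration oscillation is bounded by the current step size, which is the kind of tradeoff the paper's bounds quantify.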