Conference Paper: SAGA with Arbitrary Sampling
Title | SAGA with Arbitrary Sampling |
---|---|
Authors | Qian, X; Qu, Z; Richtarik, P |
Issue Date | 2019 |
Publisher | International Machine Learning Society (IMLS). The Proceedings' web site is located at http://proceedings.mlr.press/ |
Citation | Proceedings of the 36th International Conference on Machine Learning (ICML 2019), Long Beach, CA, USA, 10-15 June 2019. In Proceedings of Machine Learning Research (PMLR), v. 97, p. 5190-5199 |
Abstract | We study the problem of minimizing the average of a very large number of smooth functions, which is of key importance in training supervised learning models. One of the most celebrated methods in this context is the SAGA algorithm of Defazio et al. (2014). Despite years of research on the topic, a general-purpose version of SAGA—one that would include arbitrary importance sampling and minibatching schemes—does not exist. We remedy this situation and propose a general and flexible variant of SAGA following the arbitrary sampling paradigm. We perform an iteration complexity analysis of the method, largely possible due to the construction of new stochastic Lyapunov functions. We establish linear convergence rates in the smooth and strongly convex regime, and under certain error bound conditions also in a regime without strong convexity. Our rates match those of the primal-dual method Quartz (Qu et al., 2015) for which an arbitrary sampling analysis is available, which makes a significant step towards closing the gap in our understanding of complexity of primal and dual methods for finite sum problems. Finally, we show through experiments that specific variants of our general SAGA method can perform better in practice than other competing methods. |
Persistent Identifier | http://hdl.handle.net/10722/275289 |
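For context, the abstract above concerns a generalization of the SAGA variance-reduced gradient method. The sketch below shows only the basic uniform-sampling SAGA update of Defazio et al. (2014), which the paper extends with arbitrary sampling and minibatching; the function names and the toy least-squares problem are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def saga(grad_i, n, x0, step, iters, rng=None):
    """Minimal SAGA sketch for min_x (1/n) * sum_i f_i(x).

    grad_i(i, x) returns the gradient of f_i at x. This is plain
    uniform-sampling SAGA; the paper's variant replaces the uniform
    index choice with an arbitrary sampling scheme.
    """
    rng = rng or np.random.default_rng(0)
    x = x0.astype(float).copy()
    # Table of the most recently seen gradient for each f_i,
    # plus its running average.
    table = np.stack([grad_i(i, x) for i in range(n)])
    avg = table.mean(axis=0)
    for _ in range(iters):
        j = rng.integers(n)
        g_new = grad_i(j, x)
        # Unbiased, variance-reduced gradient estimate.
        x -= step * (g_new - table[j] + avg)
        # Refresh the average before overwriting the stored gradient.
        avg += (g_new - table[j]) / n
        table[j] = g_new
    return x

# Toy usage: noiseless least squares, f_i(x) = 0.5 * (a_i @ x - b_i)**2,
# whose exact minimizer is the all-ones vector by construction.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 5))
b = A @ np.ones(5)
g = lambda i, x: (A[i] @ x - b[i]) * A[i]
x = saga(g, 50, np.zeros(5), step=0.01, iters=30000)
```

Because each f_i is smooth and the stored gradients converge to the gradients at the optimum, the variance of the update vanishes and the iterates converge linearly, in line with the rates discussed in the abstract.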
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Qian, X | - |
dc.contributor.author | Qu, Z | - |
dc.contributor.author | Richtarik, P | - |
dc.date.accessioned | 2019-09-10T02:39:30Z | - |
dc.date.available | 2019-09-10T02:39:30Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | Proceedings of the 36th International Conference on Machine Learning (ICML 2019), Long Beach, CA, USA, 10-15 June 2019. In Proceedings of Machine Learning Research (PMLR), v. 97, p. 5190-5199 | - |
dc.identifier.uri | http://hdl.handle.net/10722/275289 | - |
dc.description.abstract | We study the problem of minimizing the average of a very large number of smooth functions, which is of key importance in training supervised learning models. One of the most celebrated methods in this context is the SAGA algorithm of Defazio et al. (2014). Despite years of research on the topic, a general-purpose version of SAGA—one that would include arbitrary importance sampling and minibatching schemes—does not exist. We remedy this situation and propose a general and flexible variant of SAGA following the arbitrary sampling paradigm. We perform an iteration complexity analysis of the method, largely possible due to the construction of new stochastic Lyapunov functions. We establish linear convergence rates in the smooth and strongly convex regime, and under certain error bound conditions also in a regime without strong convexity. Our rates match those of the primal-dual method Quartz (Qu et al., 2015) for which an arbitrary sampling analysis is available, which makes a significant step towards closing the gap in our understanding of complexity of primal and dual methods for finite sum problems. Finally, we show through experiments that specific variants of our general SAGA method can perform better in practice than other competing methods. | - |
dc.language | eng | - |
dc.publisher | International Machine Learning Society (IMLS). The Proceedings' web site is located at http://proceedings.mlr.press/ | - |
dc.relation.ispartof | Proceedings of Machine Learning Research (PMLR) | - |
dc.relation.ispartof | The 36th International Conference on Machine Learning (ICML 2019) | - |
dc.title | SAGA with Arbitrary Sampling | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Qu, Z: zhengqu@hku.hk | - |
dc.identifier.authority | Qu, Z=rp02096 | - |
dc.identifier.hkuros | 304642 | - |
dc.identifier.volume | 97 | - |
dc.identifier.spage | 5190 | - |
dc.identifier.epage | 5199 | - |
dc.publisher.place | United States | - |