File Download
There are no files associated with this item.
Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1016/j.neucom.2017.07.024
- Scopus: eid_2-s2.0-85026459621
- WOS: WOS:000413821400049
Article: On the flexibility of block coordinate descent for large-scale optimization
Title | On the flexibility of block coordinate descent for large-scale optimization |
---|---|
Authors | Wang, Xiangfeng; Zhang, Wenjie; Yan, Junchi; Yuan, Xiaoming; Zha, Hongyuan |
Keywords | Jacobi; Gauss–Seidel; Large-scale optimization; Block coordinate descent |
Issue Date | 2018 |
Citation | Neurocomputing, 2018, v. 272, p. 471-480 |
Abstract | © 2017 Elsevier B.V. We consider a large-scale minimization problem (not necessarily convex) with a non-smooth separable convex penalty. Problems of this form arise widely in modern large-scale machine learning and signal processing applications. In this paper, we present a new perspective on parallel Block Coordinate Descent (BCD) methods. Specifically, we introduce the concept of a two-layered block-variable updating loop for parallel BCD methods in a modern computing environment comprised of multiple distributed computing nodes. The outer loop refers to the block-variable updates assigned to the distributed nodes, and the inner loop involves the updating steps inside each node. Each loop can adopt either the Jacobi or the Gauss–Seidel update rule. In particular, we give a detailed theoretical convergence analysis for two practical schemes, Jacobi/Gauss–Seidel and Gauss–Seidel/Jacobi, each of which embodies an algorithm. Our new perspective and the theoretical results behind it help devise parallel BCD algorithms in a principled fashion, which in turn lends BCD methods a flexible implementation suited to the parallel computing environment. The effectiveness of the algorithmic framework is verified on the benchmark tasks of large-scale ℓ1-regularized sparse logistic regression and non-negative matrix factorization. |
Persistent Identifier | http://hdl.handle.net/10722/251231 |
ISSN | 0925-2312 (2023 Impact Factor: 5.5; 2023 SCImago Journal Rankings: 1.815) |
ISI Accession Number ID | WOS:000413821400049 |
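The abstract's distinction between the Jacobi and Gauss–Seidel update rules can be illustrated with a minimal coordinate-descent sketch on a small quadratic. This is illustrative Python only, not the authors' implementation: a Jacobi sweep computes every coordinate update from the same previous iterate (so the updates are independent and parallelizable), while a Gauss–Seidel sweep lets each update see the freshest values.

```python
# Minimize f(x) = 0.5 * x^T Q x - b^T x by exact coordinate minimization.
# Hypothetical toy problem: Q is symmetric positive definite, and the
# minimizer solves Qx = b, i.e. x* = [1.0, 1.0] for the data below.
Q = [[2.0, 1.0], [1.0, 2.0]]
b = [3.0, 3.0]

def coord_min(x, i):
    """Exact minimizer of f along coordinate i, with the others fixed."""
    s = sum(Q[i][j] * x[j] for j in range(len(x)) if j != i)
    return (b[i] - s) / Q[i][i]

def gauss_seidel_sweep(x):
    # Gauss-Seidel rule: update in place, so coordinate i sees the
    # already-updated values of coordinates 0..i-1.
    x = list(x)
    for i in range(len(x)):
        x[i] = coord_min(x, i)
    return x

def jacobi_sweep(x):
    # Jacobi rule: every coordinate is updated from the same old iterate,
    # so all updates are independent and could run in parallel.
    return [coord_min(x, i) for i in range(len(x))]

x_gs = [0.0, 0.0]
x_j = [0.0, 0.0]
for _ in range(50):
    x_gs = gauss_seidel_sweep(x_gs)
    x_j = jacobi_sweep(x_j)

print(x_gs, x_j)  # both approach the minimizer [1.0, 1.0]
```

The paper's two-layered loop composes these two rules: one rule across distributed nodes (outer layer) and one inside each node (inner layer), giving combinations such as Jacobi/Gauss–Seidel.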
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wang, Xiangfeng | - |
dc.contributor.author | Zhang, Wenjie | - |
dc.contributor.author | Yan, Junchi | - |
dc.contributor.author | Yuan, Xiaoming | - |
dc.contributor.author | Zha, Hongyuan | - |
dc.date.accessioned | 2018-02-01T01:54:58Z | - |
dc.date.available | 2018-02-01T01:54:58Z | - |
dc.date.issued | 2018 | - |
dc.identifier.citation | Neurocomputing, 2018, v. 272, p. 471-480 | - |
dc.identifier.issn | 0925-2312 | - |
dc.identifier.uri | http://hdl.handle.net/10722/251231 | - |
dc.description.abstract | © 2017 Elsevier B.V. We consider a large-scale minimization problem (not necessarily convex) with a non-smooth separable convex penalty. Problems of this form arise widely in modern large-scale machine learning and signal processing applications. In this paper, we present a new perspective on parallel Block Coordinate Descent (BCD) methods. Specifically, we introduce the concept of a two-layered block-variable updating loop for parallel BCD methods in a modern computing environment comprised of multiple distributed computing nodes. The outer loop refers to the block-variable updates assigned to the distributed nodes, and the inner loop involves the updating steps inside each node. Each loop can adopt either the Jacobi or the Gauss–Seidel update rule. In particular, we give a detailed theoretical convergence analysis for two practical schemes, Jacobi/Gauss–Seidel and Gauss–Seidel/Jacobi, each of which embodies an algorithm. Our new perspective and the theoretical results behind it help devise parallel BCD algorithms in a principled fashion, which in turn lends BCD methods a flexible implementation suited to the parallel computing environment. The effectiveness of the algorithmic framework is verified on the benchmark tasks of large-scale ℓ1-regularized sparse logistic regression and non-negative matrix factorization. | - |
dc.language | eng | - |
dc.relation.ispartof | Neurocomputing | - |
dc.subject | Jacobi | - |
dc.subject | Gauss–Seidel | - |
dc.subject | Large-scale optimization | - |
dc.subject | Block coordinate descent | - |
dc.title | On the flexibility of block coordinate descent for large-scale optimization | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1016/j.neucom.2017.07.024 | - |
dc.identifier.scopus | eid_2-s2.0-85026459621 | - |
dc.identifier.volume | 272 | - |
dc.identifier.spage | 471 | - |
dc.identifier.epage | 480 | - |
dc.identifier.eissn | 1872-8286 | - |
dc.identifier.isi | WOS:000413821400049 | - |
dc.identifier.issnl | 0925-2312 | - |