There are no files associated with this item.
Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1109/TPDS.2024.3429625
- Scopus: eid_2-s2.0-85199082415
Citations:
- Scopus: 0
Article: Swift: Expedited Failure Recovery for Large-Scale DNN Training
| Title | Swift: Expedited Failure Recovery for Large-Scale DNN Training |
|---|---|
| Authors | Zhong, Yuchen; Sheng, Guangming; Liu, Juncheng; Yuan, Jinhui; Wu, Chuan |
| Keywords | Distributed DNN training; failure resilience |
| Issue Date | 1-Sep-2024 |
| Publisher | Institute of Electrical and Electronics Engineers |
| Citation | IEEE Transactions on Parallel and Distributed Systems, 2024, v. 35, n. 9, p. 1644-1656 |
| Abstract | As the size of deep learning models gets larger and larger, training takes longer time and more resources, making fault tolerance more and more critical. Existing state-of-the-art methods like CheckFreq and Elastic Horovod need to back up a copy of the model state (i.e., parameters and optimizer states) in memory, which is costly for large models and leads to non-trivial overhead. This article presents Swift, a novel recovery design for distributed deep neural network training that significantly reduces the failure recovery overhead without affecting training throughput and model accuracy. Instead of making an additional copy of the model state, Swift resolves the inconsistencies of the model state caused by the failure and exploits the replicas of the model state in data parallelism for failure recovery. We propose a logging-based approach when replicas are unavailable, which records intermediate data and replays the computation to recover the lost state upon a failure. The re-computation is distributed across multiple machines to accelerate failure recovery further. We also log intermediate data selectively, exploring the trade-off between recovery time and intermediate data storage overhead. Evaluations show that Swift significantly reduces the failure recovery time and achieves similar or better training throughput during failure-free execution compared to state-of-the-art methods without degrading final model accuracy. Swift can also achieve up to 1.16x speedup in total training time compared to state-of-the-art methods. |
| Persistent Identifier | http://hdl.handle.net/10722/359567 |
| ISSN | 1045-9219 |
| 2023 Impact Factor | 5.6 |
| 2023 SCImago Journal Rankings | 2.340 |
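The abstract describes two recovery paths: copying model state from a surviving data-parallel replica, and, when no replica exists, replaying logged intermediate data from the last consistent state. The toy sketch below illustrates both ideas only; all class and function names are illustrative and a scalar stands in for the parameter/optimizer state. It is not the authors' Swift implementation.

```python
# Minimal sketch of the two recovery paths described in the abstract.
# Hypothetical names throughout; a scalar "state" stands in for the
# model parameters and optimizer state of a real DNN worker.

class Worker:
    def __init__(self, wid, state=0):
        self.wid = wid
        self.state = state   # stands in for parameters + optimizer state
        self.log = []        # logged intermediate data (one entry per step)

    def train_step(self, batch):
        # Log the step's input so it can be replayed after a failure.
        # (Swift logs selectively to trade recovery time for storage.)
        self.log.append(batch)
        self.state += batch  # toy update rule standing in for SGD

def recover_from_replica(failed, peers):
    # Data parallelism keeps identical model state on every replica,
    # so any surviving peer can supply the lost state directly.
    failed.state = peers[0].state

def recover_by_replay(failed, base_state, logged_batches):
    # Without replicas: restart from the last consistent state and
    # replay the logged inputs to recompute the lost updates.
    failed.state = base_state
    for batch in logged_batches:
        failed.state += batch

# Usage: two replicas train in lockstep; worker 0 fails and restarts.
a, b = Worker(0), Worker(1)
for batch in [1, 2, 3]:
    a.train_step(batch)
    b.train_step(batch)

restarted = Worker(0)                # restarts with empty state
recover_from_replica(restarted, [b])
assert restarted.state == b.state == 6
```

In the paper's design the replay is additionally distributed across multiple machines to shorten recovery; this single-process sketch omits that parallelization.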
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Zhong, Yuchen | - |
| dc.contributor.author | Sheng, Guangming | - |
| dc.contributor.author | Liu, Juncheng | - |
| dc.contributor.author | Yuan, Jinhui | - |
| dc.contributor.author | Wu, Chuan | - |
| dc.date.accessioned | 2025-09-08T00:30:14Z | - |
| dc.date.available | 2025-09-08T00:30:14Z | - |
| dc.date.issued | 2024-09-01 | - |
| dc.identifier.citation | IEEE Transactions on Parallel and Distributed Systems, 2024, v. 35, n. 9, p. 1644-1656 | - |
| dc.identifier.issn | 1045-9219 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/359567 | - |
| dc.description.abstract | As the size of deep learning models gets larger and larger, training takes longer time and more resources, making fault tolerance more and more critical. Existing state-of-the-art methods like CheckFreq and Elastic Horovod need to back up a copy of the model state (i.e., parameters and optimizer states) in memory, which is costly for large models and leads to non-trivial overhead. This article presents Swift, a novel recovery design for distributed deep neural network training that significantly reduces the failure recovery overhead without affecting training throughput and model accuracy. Instead of making an additional copy of the model state, Swift resolves the inconsistencies of the model state caused by the failure and exploits the replicas of the model state in data parallelism for failure recovery. We propose a logging-based approach when replicas are unavailable, which records intermediate data and replays the computation to recover the lost state upon a failure. The re-computation is distributed across multiple machines to accelerate failure recovery further. We also log intermediate data selectively, exploring the trade-off between recovery time and intermediate data storage overhead. Evaluations show that Swift significantly reduces the failure recovery time and achieves similar or better training throughput during failure-free execution compared to state-of-the-art methods without degrading final model accuracy. Swift can also achieve up to 1.16x speedup in total training time compared to state-of-the-art methods. | - |
| dc.language | eng | - |
| dc.publisher | Institute of Electrical and Electronics Engineers | - |
| dc.relation.ispartof | IEEE Transactions on Parallel and Distributed Systems | - |
| dc.subject | Distributed DNN training | - |
| dc.subject | failure resilience | - |
| dc.title | Swift: Expedited Failure Recovery for Large-Scale DNN Training | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1109/TPDS.2024.3429625 | - |
| dc.identifier.scopus | eid_2-s2.0-85199082415 | - |
| dc.identifier.volume | 35 | - |
| dc.identifier.issue | 9 | - |
| dc.identifier.spage | 1644 | - |
| dc.identifier.epage | 1656 | - |
| dc.identifier.eissn | 1558-2183 | - |
| dc.identifier.issnl | 1045-9219 | - |
