Article: Swift: Expedited Failure Recovery for Large-Scale DNN Training

Title: Swift: Expedited Failure Recovery for Large-Scale DNN Training
Authors: Zhong, Yuchen; Sheng, Guangming; Liu, Juncheng; Yuan, Jinhui; Wu, Chuan
Keywords: Distributed DNN training; failure resilience
Issue Date: 1-Sep-2024
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Parallel and Distributed Systems, 2024, v. 35, n. 9, p. 1644-1656
Abstract: As deep learning models grow larger, training takes longer and consumes more resources, making fault tolerance increasingly critical. Existing state-of-the-art methods such as CheckFreq and Elastic Horovod need to back up a copy of the model state (i.e., parameters and optimizer states) in memory, which is costly for large models and leads to non-trivial overhead. This article presents Swift, a novel recovery design for distributed deep neural network training that significantly reduces the failure recovery overhead without affecting training throughput or model accuracy. Instead of making an additional copy of the model state, Swift resolves the inconsistencies of the model state caused by the failure and exploits the replicas of the model state in data parallelism for failure recovery. When replicas are unavailable, we propose a logging-based approach that records intermediate data and replays the computation to recover the lost state upon a failure. The re-computation is distributed across multiple machines to further accelerate failure recovery. We also log intermediate data selectively, exploring the trade-off between recovery time and intermediate data storage overhead. Evaluations show that Swift significantly reduces the failure recovery time and achieves similar or better training throughput during failure-free execution compared to state-of-the-art methods, without degrading final model accuracy. Swift can also achieve up to 1.16x speedup in total training time compared to state-of-the-art methods.
Persistent Identifier: http://hdl.handle.net/10722/359567
ISSN: 1045-9219
2023 Impact Factor: 5.6
2023 SCImago Journal Rankings: 2.340
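
To make the recovery idea in the abstract concrete, below is a minimal, self-contained Python sketch of logging-based replay under simplifying assumptions: a scalar model state, plain SGD-style updates, and hypothetical names such as Worker and replay_recover. It illustrates the general technique of recovering lost state from a surviving replica or from logged intermediate data, not Swift's actual implementation or API.

# A minimal sketch (not the paper's implementation) of the logging-and-replay
# idea described in the abstract: instead of checkpointing an extra in-memory
# copy of the model state, each worker logs intermediate data (here, per-step
# gradients) and a lost state is reconstructed after a failure either from a
# data-parallel replica or by replaying the recorded updates. All names below
# (Worker, replay_recover, lr, ...) are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Worker:
    """One data-parallel worker holding a scalar 'model state' for simplicity."""
    state: float = 0.0
    lr: float = 0.1
    grad_log: List[float] = field(default_factory=list)  # selectively logged intermediate data

    def train_step(self, grad: float) -> None:
        # Log the intermediate data needed for replay before applying it.
        self.grad_log.append(grad)
        self.state -= self.lr * grad


def replay_recover(initial_state: float, lr: float, grad_log: List[float]) -> float:
    """Re-compute a lost state by replaying the logged updates."""
    state = initial_state
    for grad in grad_log:
        state -= lr * grad
    return state


if __name__ == "__main__":
    # Two data-parallel replicas apply the same synchronized gradients and thus
    # hold identical state, so either replica (or the log) can restore a failed
    # peer without a separate backup copy of the model state.
    grads = [0.5, -0.2, 0.3, 0.1]
    w0, w1 = Worker(), Worker()
    for g in grads:
        w0.train_step(g)
        w1.train_step(g)

    # Simulate w1 failing and losing its state: recover it from the surviving
    # replica w0, or, if no replica exists, by replaying the logged updates.
    recovered_from_replica = w0.state
    recovered_from_log = replay_recover(initial_state=0.0, lr=0.1, grad_log=w0.grad_log)
    assert abs(recovered_from_replica - recovered_from_log) < 1e-9
    print("recovered state:", recovered_from_log)

In the paper's setting the replay is additionally distributed across machines and the log is pruned selectively, which this toy example does not model.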


DC Field: Value
dc.contributor.author: Zhong, Yuchen
dc.contributor.author: Sheng, Guangming
dc.contributor.author: Liu, Juncheng
dc.contributor.author: Yuan, Jinhui
dc.contributor.author: Wu, Chuan
dc.date.accessioned: 2025-09-08T00:30:14Z
dc.date.available: 2025-09-08T00:30:14Z
dc.date.issued: 2024-09-01
dc.identifier.citation: IEEE Transactions on Parallel and Distributed Systems, 2024, v. 35, n. 9, p. 1644-1656
dc.identifier.issn: 1045-9219
dc.identifier.uri: http://hdl.handle.net/10722/359567
dc.description.abstract: As deep learning models grow larger, training takes longer and consumes more resources, making fault tolerance increasingly critical. Existing state-of-the-art methods such as CheckFreq and Elastic Horovod need to back up a copy of the model state (i.e., parameters and optimizer states) in memory, which is costly for large models and leads to non-trivial overhead. This article presents Swift, a novel recovery design for distributed deep neural network training that significantly reduces the failure recovery overhead without affecting training throughput or model accuracy. Instead of making an additional copy of the model state, Swift resolves the inconsistencies of the model state caused by the failure and exploits the replicas of the model state in data parallelism for failure recovery. When replicas are unavailable, we propose a logging-based approach that records intermediate data and replays the computation to recover the lost state upon a failure. The re-computation is distributed across multiple machines to further accelerate failure recovery. We also log intermediate data selectively, exploring the trade-off between recovery time and intermediate data storage overhead. Evaluations show that Swift significantly reduces the failure recovery time and achieves similar or better training throughput during failure-free execution compared to state-of-the-art methods, without degrading final model accuracy. Swift can also achieve up to 1.16x speedup in total training time compared to state-of-the-art methods.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Transactions on Parallel and Distributed Systems
dc.subject: Distributed DNN training
dc.subject: failure resilience
dc.title: Swift: Expedited Failure Recovery for Large-Scale DNN Training
dc.type: Article
dc.identifier.doi: 10.1109/TPDS.2024.3429625
dc.identifier.scopus: eid_2-s2.0-85199082415
dc.identifier.volume: 35
dc.identifier.issue: 9
dc.identifier.spage: 1644
dc.identifier.epage: 1656
dc.identifier.eissn: 1558-2183
dc.identifier.issnl: 1045-9219
