Conference Paper: Swift: Expedited Failure Recovery for Large-Scale DNN Training
Title | Swift: Expedited Failure Recovery for Large-Scale DNN Training |
---|---|
Authors | Zhong, Yuchen; Sheng, Guangming; Liu, Juncheng; Yuan, Jinhui; Wu, Chuan |
Keywords | distributed DNN training; failure resilience |
Issue Date | 21-Feb-2023 |
Abstract | As deep learning models grow ever larger, training takes longer and consumes more resources, making fault tolerance critical. Existing state-of-the-art methods such as CheckFreq and Elastic Horovod need to back up a copy of the model state in memory, which is costly for large models and leads to non-trivial overhead. This paper presents Swift, a novel failure recovery design for distributed deep neural network training that significantly reduces the failure recovery overhead without affecting training throughput or model accuracy. Instead of making an additional copy of the model state, Swift resolves the inconsistencies in the model state caused by the failure and exploits replicas of the model state in data parallelism for failure recovery. When replicas are unavailable, we propose a logging-based approach that records intermediate data and replays the computation to recover the lost state upon a failure. Evaluations show that Swift significantly reduces the failure recovery time and achieves similar or better training throughput during failure-free execution compared to state-of-the-art methods, without degrading final model accuracy. |
Persistent Identifier | http://hdl.handle.net/10722/333891 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhong, Yuchen | - |
dc.contributor.author | Sheng, Guangming | - |
dc.contributor.author | Liu, Juncheng | - |
dc.contributor.author | Yuan, Jinhui | - |
dc.contributor.author | Wu, Chuan | - |
dc.date.accessioned | 2023-10-06T08:39:56Z | - |
dc.date.available | 2023-10-06T08:39:56Z | - |
dc.date.issued | 2023-02-21 | - |
dc.identifier.uri | http://hdl.handle.net/10722/333891 | - |
dc.description.abstract | As deep learning models grow ever larger, training takes longer and consumes more resources, making fault tolerance critical. Existing state-of-the-art methods such as CheckFreq and Elastic Horovod need to back up a copy of the model state in memory, which is costly for large models and leads to non-trivial overhead. This paper presents Swift, a novel failure recovery design for distributed deep neural network training that significantly reduces the failure recovery overhead without affecting training throughput or model accuracy. Instead of making an additional copy of the model state, Swift resolves the inconsistencies in the model state caused by the failure and exploits replicas of the model state in data parallelism for failure recovery. When replicas are unavailable, we propose a logging-based approach that records intermediate data and replays the computation to recover the lost state upon a failure. Evaluations show that Swift significantly reduces the failure recovery time and achieves similar or better training throughput during failure-free execution compared to state-of-the-art methods, without degrading final model accuracy. | -
dc.language | eng | - |
dc.relation.ispartof | the 28th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming (PPoPP) (25/02/2023-01/03/2023, Montreal) | - |
dc.subject | distributed DNN training | - |
dc.subject | failure resilience | - |
dc.title | Swift: Expedited Failure Recovery for Large-Scale DNN Training | - |
dc.type | Conference_Paper | - |
dc.identifier.doi | 10.1145/3572848.3577510 | - |
dc.identifier.scopus | eid_2-s2.0-85149322954 | - |
dc.identifier.spage | 447 | - |
dc.identifier.epage | 449 | - |
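
The abstract describes a logging-based recovery scheme: intermediate data exchanged during training is recorded so that, after a failure, the lost model state can be rebuilt by replaying computation rather than restoring a full in-memory backup. The sketch below is a minimal, hypothetical illustration of that replay idea in Python; the names (`Stage`, `train_step`, `recover`) and the toy single-float "parameter" are invented for illustration and do not reflect the authors' implementation.

```python
# Minimal sketch of logging-based failure recovery (hypothetical, not Swift's code).
# A pipeline stage logs the inputs it receives each iteration; after a failure,
# a replacement stage replays the log from the last checkpoint to rebuild its
# parameter state, instead of restoring a full backup of the model state.

import copy


class Stage:
    """Toy pipeline stage: its parameter is a single float, and a 'training step'
    is a deterministic update driven by the input it receives."""

    def __init__(self, params=0.0):
        self.params = params
        self.input_log = []          # intermediate data recorded for replay

    def checkpoint(self):
        return copy.deepcopy(self.params)

    def train_step(self, x):
        self.input_log.append(x)     # log the intermediate input
        self.params += 0.1 * x       # stand-in for forward/backward + update
        return self.params

    def recover(self, last_checkpoint, logged_inputs):
        """Rebuild lost state by replaying logged inputs from the checkpoint."""
        self.params = copy.deepcopy(last_checkpoint)
        for x in logged_inputs:
            self.params += 0.1 * x   # replay the same deterministic updates


if __name__ == "__main__":
    stage = Stage()
    ckpt = stage.checkpoint()        # checkpoint taken before training starts
    for x in [1.0, 2.0, 3.0]:
        stage.train_step(x)
    state_before_failure = stage.params

    # Simulate losing the stage and recovering a fresh replacement from the
    # checkpoint plus the logged intermediate data. In a real system the log
    # and checkpoint would be stored off the failing worker.
    replacement = Stage()
    replacement.recover(ckpt, stage.input_log)
    assert abs(replacement.params - state_before_failure) < 1e-9
    print("recovered state:", replacement.params)
```

In a real distributed setting the replayed computation would be the stage's actual forward/backward passes, and the logged intermediate data would be kept on peer workers or shared storage so it survives the failure; the point of the sketch is only the replay-from-log recovery pattern summarized in the abstract.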