Conference Paper: Online scheduling of heterogeneous distributed machine learning jobs

Title: Online scheduling of heterogeneous distributed machine learning jobs
Authors: Zhang, Q; Zhou, R; Wu, C; Jiao, L; Li, Z
Issue Date: 2020
Publisher: Association for Computing Machinery (ACM)
Citation: Proceedings of the Twenty-first ACM International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing (Mobihoc '20), Virtual Conference, Boston, MA, USA, 11-14 October 2020, p. 111-120
Abstract: Distributed machine learning (ML) has played a key role in today's proliferation of AI services. A typical model of distributed ML is to partition training datasets over multiple worker nodes to update model parameters in parallel, adopting a parameter server architecture. ML training jobs are typically resource elastic, completed using various time lengths with different resource configurations. A fundamental problem in a distributed ML cluster is how to explore the demand elasticity of ML jobs and schedule them with different resource configurations, such that the utilization of resources is maximized and average job completion time is minimized. To address it, we propose an online scheduling algorithm to decide the execution time window, the number and the type of concurrent workers and parameter servers for each job upon its arrival, with a goal of minimizing the weighted average completion time. Our online algorithm consists of (i) an online scheduling framework that groups unprocessed ML training jobs into a batch iteratively, and (ii) a batch scheduling algorithm that configures each ML job to maximize the total weight of scheduled jobs in the current iteration. Our online algorithm guarantees a good parameterized competitive ratio with polynomial time complexity. Extensive evaluations using real-world data demonstrate that it outperforms state-of-the-art schedulers in today's AI cloud systems.
Persistent Identifier: http://hdl.handle.net/10722/301417
ISBN: 9781450380157
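The abstract's two-part structure (an online framework that iteratively batches pending jobs, plus a per-batch step that picks job configurations to maximize total weight) can be sketched roughly as follows. This is an illustrative toy, not the paper's algorithm: `Job`, `schedule_batch`, `online_schedule`, the single capacity cap, and the weight-per-demand greedy rule are all simplifying assumptions made here for concreteness.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    weight: float   # priority weight in the scheduling objective
    demand: int     # workers + parameter servers requested (flattened to one number)
    runtime: int    # time slots needed at that configuration

def schedule_batch(jobs, capacity):
    """Greedy stand-in for the batch step: choose a subset of pending jobs
    with high total weight under a single resource-capacity cap."""
    chosen, used = [], 0
    # Rank by weight per unit of demand -- a common greedy heuristic,
    # not the configuration search described in the paper.
    for job in sorted(jobs, key=lambda j: j.weight / j.demand, reverse=True):
        if used + job.demand <= capacity:
            chosen.append(job)
            used += job.demand
    return chosen

def online_schedule(arrivals, capacity):
    """Online framework: each iteration, fold newly arrived jobs into the
    pending pool, schedule a batch, and carry the rest forward."""
    pending, schedule, t = [], [], 0
    for batch_arrivals in arrivals:   # one iteration per time step
        pending.extend(batch_arrivals)
        chosen = schedule_batch(pending, capacity)
        schedule.append((t, [j.name for j in chosen]))
        pending = [j for j in pending if j not in chosen]
        t += 1
    return schedule
```

With capacity 2, a heavy job arriving at t=0 is scheduled immediately, while a lighter one waits and competes with later arrivals — the elasticity/deferral trade-off the abstract refers to.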


DC Field: Value
dc.contributor.author: Zhang, Q
dc.contributor.author: Zhou, R
dc.contributor.author: Wu, C
dc.contributor.author: Jiao, L
dc.contributor.author: Li, Z
dc.date.accessioned: 2021-07-27T08:10:45Z
dc.date.available: 2021-07-27T08:10:45Z
dc.date.issued: 2020
dc.identifier.citation: Proceedings of the Twenty-first ACM International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing (Mobihoc '20), Virtual Conference, Boston, MA, USA, 11-14 October 2020, p. 111-120
dc.identifier.isbn: 9781450380157
dc.identifier.uri: http://hdl.handle.net/10722/301417
dc.description.abstract: Distributed machine learning (ML) has played a key role in today's proliferation of AI services. A typical model of distributed ML is to partition training datasets over multiple worker nodes to update model parameters in parallel, adopting a parameter server architecture. ML training jobs are typically resource elastic, completed using various time lengths with different resource configurations. A fundamental problem in a distributed ML cluster is how to explore the demand elasticity of ML jobs and schedule them with different resource configurations, such that the utilization of resources is maximized and average job completion time is minimized. To address it, we propose an online scheduling algorithm to decide the execution time window, the number and the type of concurrent workers and parameter servers for each job upon its arrival, with a goal of minimizing the weighted average completion time. Our online algorithm consists of (i) an online scheduling framework that groups unprocessed ML training jobs into a batch iteratively, and (ii) a batch scheduling algorithm that configures each ML job to maximize the total weight of scheduled jobs in the current iteration. Our online algorithm guarantees a good parameterized competitive ratio with polynomial time complexity. Extensive evaluations using real-world data demonstrate that it outperforms state-of-the-art schedulers in today's AI cloud systems.
dc.language: eng
dc.publisher: Association for Computing Machinery (ACM)
dc.relation.ispartof: Proceedings of the Twenty-First International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing
dc.title: Online scheduling of heterogeneous distributed machine learning jobs
dc.type: Conference_Paper
dc.identifier.email: Wu, C: cwu@cs.hku.hk
dc.identifier.authority: Wu, C=rp01397
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1145/3397166.3409128
dc.identifier.scopus: eid_2-s2.0-85093915586
dc.identifier.hkuros: 323514
dc.identifier.spage: 111
dc.identifier.epage: 120
dc.publisher.place: New York, NY
