Conference Paper: Accelerating Partitioned Edge Learning via Joint Parameter-and-Bandwidth Allocation

Title: Accelerating Partitioned Edge Learning via Joint Parameter-and-Bandwidth Allocation
Authors: Wen, D; Bennis, M; Huang, K
Keywords: Training; Wireless networks; Stochastic processes; Resource management; Channel allocation
Issue Date: 2020
Publisher: Institute of Electrical and Electronics Engineers. The conference proceedings web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000308
Citation: Proceedings of GLOBECOM 2020 - 2020 IEEE Global Communications Conference, Virtual Conference, Taipei, Taiwan, 7-11 December 2020, p. 1-6
Abstract: In this paper, we consider the framework of partitioned edge learning for iteratively training a large-scale model using many resource-constrained devices (called workers). In each iteration, the model is dynamically partitioned into parametric blocks, which are downloaded to worker groups for updating using their local data. The local updates are then uploaded to and cascaded by the server to update the global model. To reduce resource usage by minimizing the total learning-and-communication latency, this work focuses on the novel joint design of parameter allocation (computation load) and bandwidth allocation (for downloading and uploading). Two design approaches are adopted. First, a practical sequential approach, called partially integrated parameter-and-bandwidth allocation (PABA), yields a scheme, namely parameter-aware bandwidth allocation, which allocates the largest bandwidth to the slowest worker. Second, the parameter and bandwidth allocations are jointly optimized. Although the resulting problem is nonconvex, an efficient and optimal solution algorithm is derived by nesting a bisection search with the solution of a convex problem. Experimental results using real data demonstrate that integrating PABA can substantially improve the performance of partitioned edge learning in terms of latency (e.g., by 46%) and accuracy (e.g., by 4%).
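To make the partition-update-cascade loop described in the abstract concrete, here is a minimal toy sketch (not the authors' code): a server splits the parameter vector into blocks, each worker group updates its block, and the server cascades the results. The block sizes, the fake gradient step, and all names are assumptions for illustration only.

```python
import numpy as np

# Toy sketch of one partitioned-edge-learning round (illustrative, not the
# paper's implementation). The server partitions the global parameter vector
# into blocks, each worker group updates its block "locally", and the server
# cascades the updated blocks back into the global model.

rng = np.random.default_rng(0)
theta = rng.normal(size=12)        # global model parameters
sizes = [5, 4, 3]                  # hypothetical parameter (load) allocation

bounds = np.cumsum([0] + sizes)    # block boundaries: [0, 5, 9, 12]
blocks = [theta[a:b].copy() for a, b in zip(bounds[:-1], bounds[1:])]

def local_update(block, lr=0.1):
    # Stand-in for a gradient step computed from a worker's local data.
    fake_grad = rng.normal(size=block.shape)
    return block - lr * fake_grad

# Workers update their downloaded blocks; the server cascades the uploads.
theta = np.concatenate([local_update(b) for b in blocks])
```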
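The bisection idea mentioned in the abstract can be illustrated on a simplified min-max latency model (an assumed model, not the paper's exact formulation): if worker k needs bandwidth d_k / (r_k (T - c_k)) to meet a round deadline T, feasibility is monotone in T, so bisecting on T finds the minimum latency. At the optimum all workers finish simultaneously, so the slowest worker receives the largest bandwidth, consistent with the parameter-aware bandwidth allocation scheme described above.

```python
# Hedged sketch: minimize the maximum one-round latency c_k + d_k/(r_k * b_k)
# subject to sum(b_k) <= B, by bisecting on the deadline T. All variable
# names and the latency model are assumptions for illustration.

def min_latency_bandwidth(d, r, c, B, tol=1e-9):
    """d: parameter-block sizes (bits), r: spectral efficiencies (bit/s/Hz),
    c: local computation times (s), B: total bandwidth (Hz)."""
    def needed(T):
        # Minimum bandwidth worker k needs to finish within deadline T.
        return [dk / (rk * (T - ck)) for dk, rk, ck in zip(d, r, c)]

    n = len(d)
    lo = max(c)                          # no deadline below max(c) is meetable
    hi = max(ck + dk * n / (rk * B)      # an equal split B/n is always feasible
             for dk, rk, ck in zip(d, r, c))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(needed(mid)) <= B:
            hi = mid                     # deadline mid is achievable
        else:
            lo = mid
    return hi, needed(hi)

# Example: worker 1 carries 4x the effective payload (d/r) and receives 4x
# the bandwidth; both workers finish at the same time.
T, b = min_latency_bandwidth(d=[1e6, 2e6], r=[1.0, 0.5], c=[0.2, 0.2], B=1e6)
print(T, b)  # ~5.2 s, bandwidths ~[2e5, 8e5] Hz
```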
Persistent Identifier: http://hdl.handle.net/10722/295865
ISSN: 2334-0983
ISI Accession Number ID: WOS:000668970503150

Dublin Core metadata:
dc.contributor.author: Wen, D
dc.contributor.author: Bennis, M
dc.contributor.author: Huang, K
dc.date.accessioned: 2021-02-08T08:15:08Z
dc.date.available: 2021-02-08T08:15:08Z
dc.date.issued: 2020
dc.identifier.citation: Proceedings of GLOBECOM 2020 - 2020 IEEE Global Communications Conference, Virtual Conference, Taipei, Taiwan, 7-11 December 2020, p. 1-6
dc.identifier.issn: 2334-0983
dc.identifier.uri: http://hdl.handle.net/10722/295865
dc.description.abstract: In this paper, we consider the framework of partitioned edge learning for iteratively training a large-scale model using many resource-constrained devices (called workers). In each iteration, the model is dynamically partitioned into parametric blocks, which are downloaded to worker groups for updating using their local data. The local updates are then uploaded to and cascaded by the server to update the global model. To reduce resource usage by minimizing the total learning-and-communication latency, this work focuses on the novel joint design of parameter allocation (computation load) and bandwidth allocation (for downloading and uploading). Two design approaches are adopted. First, a practical sequential approach, called partially integrated parameter-and-bandwidth allocation (PABA), yields a scheme, namely parameter-aware bandwidth allocation, which allocates the largest bandwidth to the slowest worker. Second, the parameter and bandwidth allocations are jointly optimized. Although the resulting problem is nonconvex, an efficient and optimal solution algorithm is derived by nesting a bisection search with the solution of a convex problem. Experimental results using real data demonstrate that integrating PABA can substantially improve the performance of partitioned edge learning in terms of latency (e.g., by 46%) and accuracy (e.g., by 4%).
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers. The conference proceedings web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000308
dc.relation.ispartof: IEEE Global Communications Conference (GLOBECOM)
dc.rights: IEEE Global Communications Conference (GLOBECOM). Copyright © Institute of Electrical and Electronics Engineers.
dc.rights: ©2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Training
dc.subject: Wireless networks
dc.subject: Stochastic processes
dc.subject: Resource management
dc.subject: Channel allocation
dc.title: Accelerating Partitioned Edge Learning via Joint Parameter-and-Bandwidth Allocation
dc.type: Conference_Paper
dc.identifier.email: Huang, K: huangkb@eee.hku.hk
dc.identifier.authority: Huang, K=rp01875
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/GLOBECOM42002.2020.9347992
dc.identifier.scopus: eid_2-s2.0-85100873814
dc.identifier.hkuros: 321259
dc.identifier.spage: 1
dc.identifier.epage: 6
dc.identifier.isi: WOS:000668970503150
dc.publisher.place: United States
dc.identifier.issnl: 2334-0983
