Conference Paper: High-Dimensional Stochastic Gradient Quantization for Communication-Efficient Edge Learning

Title: High-Dimensional Stochastic Gradient Quantization for Communication-Efficient Edge Learning
Authors: Du, Y; Yang, S; Huang, K
Keywords: approximation theory; gradient methods; learning (artificial intelligence); quantisation (signal); stochastic processes
Issue Date: 2019
Publisher: IEEE. The Journal's web site is located at https://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1803434
Citation: The 7th IEEE Global Conference on Signal and Information Processing (GlobalSIP), Ottawa, Ontario, Canada, 11-14 November 2019, p. 1-5
Abstract: Edge machine learning involves deploying machine learning algorithms at the network edge to leverage massive mobile data and distributed computation resources. Many edge learning frameworks (e.g., federated learning) have been developed based on distributed gradient descent. In this approach, stochastic gradients are computed at edge devices and transmitted to an edge server, where they are aggregated to update a global AI model. Since each gradient is typically high-dimensional (with millions to billions of coefficients), communication overhead can become a bottleneck for edge learning. In this work, we propose a novel gradient compression scheme to reduce this overhead. Specifically, the norm of the stochastic gradient is quantized using a uniform quantizer, while the normalized stochastic gradient is decomposed into block gradients. A Grassmannian codebook is used to quantize each normalized block gradient, and the quantized blocks are assembled using a so-called hinge vector, which is quantized with another Grassmannian codebook. Furthermore, a practical bit-allocation strategy is developed. Simulations show that similar learning performance can be achieved with substantially lower communication overhead than the one-bit scalar quantization used in the state-of-the-art design, signed SGD.
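The compression pipeline described in the abstract (quantizing the gradient norm, splitting the normalized gradient into blocks, quantizing each block with a Grassmannian codebook, and reassembling the blocks with a quantized hinge vector) can be illustrated with a short sketch. The snippet below is not the authors' implementation: random unit-norm codebooks stand in for properly designed Grassmannian codebooks, the hinge vector is taken to be the unit vector of block-gradient norms, and parameters such as norm_bits, norm_max, and the codebook sizes are illustrative assumptions.

```python
import numpy as np

def uniform_quantize(x, num_bits, max_val):
    """Uniformly quantize a nonnegative scalar x onto [0, max_val] with num_bits bits."""
    levels = 2 ** num_bits - 1
    step = max_val / levels
    return float(np.clip(np.round(x / step), 0, levels) * step)

def nearest_codeword(v, codebook):
    """Pick the codeword with the largest |inner product| with v (Grassmannian-style
    matching is sign-invariant) and keep the sign so the direction is preserved."""
    scores = codebook @ v
    idx = int(np.argmax(np.abs(scores)))
    return np.sign(scores[idx]) * codebook[idx]

def quantize_gradient(g, num_blocks, block_codebook, hinge_codebook,
                      norm_bits=8, norm_max=10.0):
    """Sketch of block-wise gradient quantization with a quantized hinge vector."""
    g_norm = np.linalg.norm(g)
    q_norm = uniform_quantize(g_norm, norm_bits, norm_max)    # quantize the gradient norm

    blocks = np.split(g / (g_norm + 1e-12), num_blocks)       # normalized block gradients
    block_norms = np.array([np.linalg.norm(b) for b in blocks])

    # Quantize the direction of each block with the block codebook.
    q_dirs = [nearest_codeword(b / (np.linalg.norm(b) + 1e-12), block_codebook)
              for b in blocks]

    # Hinge vector: unit vector of block norms, quantized with a second codebook.
    hinge = block_norms / (np.linalg.norm(block_norms) + 1e-12)
    q_hinge = nearest_codeword(hinge, hinge_codebook)

    # Reassemble an estimate of the original gradient from the quantized pieces.
    return q_norm * np.concatenate([w * d for w, d in zip(q_hinge, q_dirs)])

# Toy usage with random codebooks standing in for Grassmannian designs.
rng = np.random.default_rng(0)
dim, num_blocks = 64, 4
block_codebook = rng.normal(size=(32, dim // num_blocks))
block_codebook /= np.linalg.norm(block_codebook, axis=1, keepdims=True)
hinge_codebook = np.abs(rng.normal(size=(16, num_blocks)))     # nonnegative, like block norms
hinge_codebook /= np.linalg.norm(hinge_codebook, axis=1, keepdims=True)

g = rng.normal(size=dim)
g_hat = quantize_gradient(g, num_blocks, block_codebook, hinge_codebook)
cos = g @ g_hat / (np.linalg.norm(g) * np.linalg.norm(g_hat))
print(f"cosine similarity between g and its quantized version: {cos:.3f}")
```

In the paper, the random codebooks and the fixed bit budgets used here would be replaced by designed Grassmannian codebooks and the proposed bit-allocation strategy; this sketch only shows how the quantized norm, block directions, and hinge vector fit back together.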
Persistent Identifier: http://hdl.handle.net/10722/290714
ISBN: 9781728127248

 

DC Field: Value
dc.contributor.author: Du, Y
dc.contributor.author: Yang, S
dc.contributor.author: Huang, K
dc.date.accessioned: 2020-11-02T05:46:04Z
dc.date.available: 2020-11-02T05:46:04Z
dc.date.issued: 2019
dc.identifier.citation: The 7th IEEE Global Conference on Signal and Information Processing (GlobalSIP), Ottawa, Ontario, Canada, 11-14 November 2019, p. 1-5
dc.identifier.isbn: 9781728127248
dc.identifier.uri: http://hdl.handle.net/10722/290714
dc.description.abstract: Edge machine learning involves deploying machine learning algorithms at the network edge to leverage massive mobile data and distributed computation resources. Many edge learning frameworks (e.g., federated learning) have been developed based on distributed gradient descent. In this approach, stochastic gradients are computed at edge devices and transmitted to an edge server, where they are aggregated to update a global AI model. Since each gradient is typically high-dimensional (with millions to billions of coefficients), communication overhead can become a bottleneck for edge learning. In this work, we propose a novel gradient compression scheme to reduce this overhead. Specifically, the norm of the stochastic gradient is quantized using a uniform quantizer, while the normalized stochastic gradient is decomposed into block gradients. A Grassmannian codebook is used to quantize each normalized block gradient, and the quantized blocks are assembled using a so-called hinge vector, which is quantized with another Grassmannian codebook. Furthermore, a practical bit-allocation strategy is developed. Simulations show that similar learning performance can be achieved with substantially lower communication overhead than the one-bit scalar quantization used in the state-of-the-art design, signed SGD.
dc.language: eng
dc.publisher: IEEE. The Journal's web site is located at https://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1803434
dc.relation.ispartof: IEEE Global Conference on Signal and Information Processing (GlobalSIP) Proceedings
dc.rights: IEEE Global Conference on Signal and Information Processing (GlobalSIP) Proceedings. Copyright © IEEE.
dc.rights: ©2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: approximation theory
dc.subject: gradient methods
dc.subject: learning (artificial intelligence)
dc.subject: quantisation (signal)
dc.subject: stochastic processes
dc.title: High-Dimensional Stochastic Gradient Quantization for Communication-Efficient Edge Learning
dc.type: Conference_Paper
dc.identifier.email: Huang, K: huangkb@eee.hku.hk
dc.identifier.authority: Huang, K=rp01875
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/GlobalSIP45357.2019.8969082
dc.identifier.scopus: eid_2-s2.0-85079266840
dc.identifier.hkuros: 318020
dc.identifier.spage: 1
dc.identifier.epage: 5
dc.publisher.place: United States
