Conference Paper: FedHe: Heterogeneous Models and Communication-Efficient Federated Learning
Title | FedHe: Heterogeneous Models and Communication-Efficient Federated Learning |
---|---|
Authors | CHAN, YH; Ngai, CHE |
Keywords | federated learning; communication efficiency; heterogeneous models; knowledge distillation; asynchronous algorithm |
Issue Date | 2021 |
Publisher | IEEE. The Journal's web site is located at https://ieeexplore.ieee.org/xpl/conhome/1002549/all-proceedings |
Citation | Proceedings of the 17th International Conference on Mobility, Sensing and Networking (MSN 2021), Exeter, UK, 13-15 December 2021, p. 207-214 |
Abstract | Federated learning (FL) enables edge devices to cooperatively train a model while keeping the training data local and private. One common assumption in FL is that all edge devices have similar capabilities and share the same machine learning model in training, for example, an identical neural network architecture. However, the computation and storage capabilities of different devices may not be the same. Moreover, reducing communication overheads can improve training efficiency, but doing so is also a difficult problem in the FL environment. In this paper, we propose a novel FL method, called FedHe, inspired by a core idea from knowledge distillation, which can train heterogeneous models, handle asynchronous training processes, and reduce communication overheads. Our analysis and experimental results demonstrate that our proposed method outperforms state-of-the-art algorithms in terms of communication overheads and model accuracy. |
Description | Session S4: Federated Learning II |
Persistent Identifier | http://hdl.handle.net/10722/312771 |
ISBN | 9781665406697 |
ISI Accession Number ID | WOS:000817822300027 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | CHAN, YH | - |
dc.contributor.author | Ngai, CHE | - |
dc.date.accessioned | 2022-05-12T10:55:21Z | - |
dc.date.available | 2022-05-12T10:55:21Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Proceedings of the 17th International Conference on Mobility, Sensing and Networking (MSN 2021), Exeter, UK, 13-15 December 2021, p. 207-214 | - |
dc.identifier.isbn | 9781665406697 | - |
dc.identifier.uri | http://hdl.handle.net/10722/312771 | - |
dc.description | Session S4: Federated Learning II | - |
dc.description.abstract | Federated learning (FL) enables edge devices to cooperatively train a model while keeping the training data local and private. One common assumption in FL is that all edge devices have similar capabilities and share the same machine learning model in training, for example, an identical neural network architecture. However, the computation and storage capabilities of different devices may not be the same. Moreover, reducing communication overheads can improve training efficiency, but doing so is also a difficult problem in the FL environment. In this paper, we propose a novel FL method, called FedHe, inspired by a core idea from knowledge distillation, which can train heterogeneous models, handle asynchronous training processes, and reduce communication overheads. Our analysis and experimental results demonstrate that our proposed method outperforms state-of-the-art algorithms in terms of communication overheads and model accuracy. | -
dc.language | eng | - |
dc.publisher | IEEE. The Journal's web site is located at https://ieeexplore.ieee.org/xpl/conhome/1002549/all-proceedings | - |
dc.relation.ispartof | International Conference on Mobility, Sensing and Networking (MSN) Proceedings | - |
dc.rights | International Conference on Mobility, Sensing and Networking (MSN) Proceedings. Copyright © IEEE. | - |
dc.rights | ©2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | - |
dc.subject | federated learning | - |
dc.subject | communication efficiency | - |
dc.subject | heterogeneous models | - |
dc.subject | knowledge distillation | - |
dc.subject | asynchronous algorithm | - |
dc.title | FedHe: Heterogeneous Models and Communication-Efficient Federated Learning | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Ngai, CHE: chngai@eee.hku.hk | - |
dc.identifier.authority | Ngai, CHE=rp02656 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/MSN53354.2021.00043 | - |
dc.identifier.hkuros | 333051 | - |
dc.identifier.spage | 207 | - |
dc.identifier.epage | 214 | - |
dc.identifier.isi | WOS:000817822300027 | - |
dc.publisher.place | United States | - |
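
The abstract above credits a core idea from knowledge distillation for letting FedHe train heterogeneous models and reduce communication overheads. The sketch below is a rough, hedged illustration of that general idea only, not the authors' FedHe implementation (which is not reproduced in this record): clients with different model sizes exchange per-class average logits with a server instead of full model weights. The aggregation rule, the distillation term, and all names and constants are assumptions made for illustration.

```python
# Minimal, self-contained NumPy sketch of knowledge-distillation-style federated
# learning in which clients with *different* architectures exchange per-class
# average logits instead of model weights.
# NOTE: illustrative toy only; NOT the FedHe algorithm from the paper.

import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES, DIM = 3, 8


def make_client_data(n=200):
    """Synthetic private data: one Gaussian blob per class."""
    X, y = [], []
    for c in range(NUM_CLASSES):
        X.append(rng.normal(loc=c, scale=1.0, size=(n, DIM)))
        y.append(np.full(n, c))
    return np.vstack(X), np.concatenate(y)


class TinyMLP:
    """One-hidden-layer MLP; the hidden width differs per client (heterogeneous models)."""

    def __init__(self, hidden):
        self.W1 = rng.normal(scale=0.1, size=(DIM, hidden))
        self.W2 = rng.normal(scale=0.1, size=(hidden, NUM_CLASSES))

    def logits(self, X):
        return np.maximum(X @ self.W1, 0.0) @ self.W2

    def train_step(self, X, y, soft_targets=None, lr=0.05, alpha=0.5):
        """One full-batch gradient step on cross-entropy, optionally plus a
        squared-error distillation term pulling this client's logits toward
        the server's aggregated per-class logits (an assumed loss form)."""
        H = np.maximum(X @ self.W1, 0.0)
        Z = H @ self.W2
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        Y = np.eye(NUM_CLASSES)[y]
        dZ = (P - Y) / len(y)                       # cross-entropy gradient
        if soft_targets is not None:                # distillation toward global logits
            dZ += alpha * (Z - soft_targets[y]) / len(y)
        dW2 = H.T @ dZ
        dH = dZ @ self.W2.T
        dH[H <= 0] = 0.0                            # ReLU gradient
        dW1 = X.T @ dH
        self.W1 -= lr * dW1
        self.W2 -= lr * dW2

    def class_logits(self, X, y):
        """Per-class average logits -- the only thing uploaded to the server."""
        Z = self.logits(X)
        return np.vstack([Z[y == c].mean(axis=0) for c in range(NUM_CLASSES)])


# Three clients with different hidden widths and their own private datasets.
clients = [(TinyMLP(h), *make_client_data()) for h in (16, 32, 64)]
global_soft = None

for rnd in range(30):
    # Local training (could run asynchronously; sequential here for brevity).
    for model, X, y in clients:
        for _ in range(5):
            model.train_step(X, y, soft_targets=global_soft)
    # Server aggregates per-class logits (a few floats per client, not weights).
    global_soft = np.mean([m.class_logits(Xc, yc) for m, Xc, yc in clients], axis=0)

for i, (model, X, y) in enumerate(clients):
    acc = (model.logits(X).argmax(axis=1) == y).mean()
    print(f"client {i} ({model.W1.shape[1]} hidden units): train accuracy {acc:.2f}")
```

In this toy, each round uploads only a NUM_CLASSES × NUM_CLASSES block of floats per client, which is why logit exchange is far cheaper than shipping full model parameters; the actual FedHe protocol, its asynchronous handling, and its loss terms are specified in the paper itself.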