There are no files associated with this item.
Links for fulltext (may require subscription):
- Publisher Website (DOI): 10.1109/ICC45041.2023.10279442
- Scopus: eid_2-s2.0-85152950801
Conference Paper: Communication-Efficient Federated Learning with Heterogeneous Devices
Title | Communication-Efficient Federated Learning with Heterogeneous Devices |
---|---|
Authors | Chen, Zhixiong; Yi, Wenqiang; Liu, Yuanwei; Nallanathan, Arumugam |
Keywords | Device scheduling; federated learning; knowledge aggregation |
Issue Date | 2023 |
Citation | IEEE International Conference on Communications, 2023, v. 2023-May, p. 3602-3607 |
Abstract | Conventional model-aggregation-based federated learning (FL) approaches require all local models to share the same architecture and therefore fail to support practical scenarios with heterogeneous local models. Moreover, frequent model exchange is costly for resource-limited wireless networks, since modern deep neural networks usually have over a million parameters. To tackle these challenges, we first propose a novel knowledge-aided FL (KFL) framework, which aggregates lightweight high-level data features, namely knowledge, in the per-round learning process. KFL allows devices to design their machine learning models independently and reduces the communication overhead of training. We then experimentally show that different temporal device scheduling patterns lead to considerably different learning performance. With this insight, we formulate a stochastic optimization problem for joint device scheduling and bandwidth allocation under limited device energy budgets, and develop an efficient online algorithm to achieve an energy-learning trade-off in the learning process. Experimental results on the CIFAR-10 dataset show that the proposed KFL reduces communication overhead by over 87% while achieving better learning performance than the baselines. In addition, the proposed device scheduling algorithm converges faster than benchmark scheduling schemes. |
Persistent Identifier | http://hdl.handle.net/10722/349896 |
ISSN | 1550-3607 |
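The record gives no implementation details, but the abstract's core idea — exchanging lightweight per-class feature summaries ("knowledge") rather than full model parameters — can be illustrated. The sketch below is an assumption, not the paper's actual algorithm: it assumes knowledge takes the form of per-class mean feature vectors of a shared dimension, aggregated on the server by sample-weighted averaging; the function names `device_knowledge` and `aggregate_knowledge` are hypothetical.

```python
import numpy as np

def device_knowledge(features, labels, num_classes):
    # One device's "knowledge": per-class mean feature vectors extracted
    # by its own local model. Because only the feature dimension must be
    # shared, each device's model architecture can differ (assumption
    # consistent with the abstract's heterogeneous-device claim).
    dim = features.shape[1]
    knowledge = np.zeros((num_classes, dim))
    counts = np.zeros(num_classes)
    for c in range(num_classes):
        mask = labels == c
        counts[c] = mask.sum()
        if counts[c] > 0:
            knowledge[c] = features[mask].mean(axis=0)
    return knowledge, counts

def aggregate_knowledge(all_knowledge, all_counts):
    # Server side: sample-weighted average of per-class knowledge across
    # the scheduled devices; no model parameters are exchanged.
    counts = np.stack(all_counts)               # (devices, classes)
    know = np.stack(all_knowledge)              # (devices, classes, dim)
    total = counts.sum(axis=0, keepdims=True)   # (1, classes)
    weights = np.where(total > 0, counts / np.maximum(total, 1), 0.0)
    return (weights[..., None] * know).sum(axis=0)
```

Under these assumptions, each device uploads only `num_classes × feature_dim` values per round instead of millions of model weights, which is consistent with the abstract's claims that devices may keep heterogeneous architectures and that communication overhead drops by over 87%.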
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, Zhixiong | - |
dc.contributor.author | Yi, Wenqiang | - |
dc.contributor.author | Liu, Yuanwei | - |
dc.contributor.author | Nallanathan, Arumugam | - |
dc.date.accessioned | 2024-10-17T07:01:42Z | - |
dc.date.available | 2024-10-17T07:01:42Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | IEEE International Conference on Communications, 2023, v. 2023-May, p. 3602-3607 | - |
dc.identifier.issn | 1550-3607 | - |
dc.identifier.uri | http://hdl.handle.net/10722/349896 | - |
dc.description.abstract | Conventional model-aggregation-based federated learning (FL) approaches require all local models to share the same architecture and therefore fail to support practical scenarios with heterogeneous local models. Moreover, frequent model exchange is costly for resource-limited wireless networks, since modern deep neural networks usually have over a million parameters. To tackle these challenges, we first propose a novel knowledge-aided FL (KFL) framework, which aggregates lightweight high-level data features, namely knowledge, in the per-round learning process. KFL allows devices to design their machine learning models independently and reduces the communication overhead of training. We then experimentally show that different temporal device scheduling patterns lead to considerably different learning performance. With this insight, we formulate a stochastic optimization problem for joint device scheduling and bandwidth allocation under limited device energy budgets, and develop an efficient online algorithm to achieve an energy-learning trade-off in the learning process. Experimental results on the CIFAR-10 dataset show that the proposed KFL reduces communication overhead by over 87% while achieving better learning performance than the baselines. In addition, the proposed device scheduling algorithm converges faster than benchmark scheduling schemes. | - |
dc.language | eng | - |
dc.relation.ispartof | IEEE International Conference on Communications | - |
dc.subject | Device scheduling | - |
dc.subject | federated Learning | - |
dc.subject | knowledge aggregation | - |
dc.title | Communication-Efficient Federated Learning with Heterogeneous Devices | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/ICC45041.2023.10279442 | - |
dc.identifier.scopus | eid_2-s2.0-85152950801 | - |
dc.identifier.volume | 2023-May | - |
dc.identifier.spage | 3602 | - |
dc.identifier.epage | 3607 | - |