Article: Efficient Parallel Split Learning over Resource-constrained Wireless Edge Networks

Title: Efficient Parallel Split Learning over Resource-constrained Wireless Edge Networks
Authors: Lin, Zheng; Zhu, Guangyu; Deng, Yiqin; Chen, Xianhao; Gao, Yue; Huang, Kaibin; Fang, Yuguang
Keywords: Computational modeling; Data models; Distributed learning; edge computing; edge intelligence; Internet of Things; Optimization; Resource management; resource management; Servers; split learning; Training
Issue Date: 26-Jan-2024
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Mobile Computing, 2024, v. 23, n. 10, p. 9224-9239
Abstract: Increasingly deep neural networks hinder the democratization of privacy-enhancing distributed learning, such as federated learning (FL), to resource-constrained devices. To overcome this challenge, in this paper we advocate the integration of the edge computing paradigm and parallel split learning (PSL), allowing multiple edge devices to offload substantial training workloads to an edge server via layer-wise model split. Observing that existing PSL schemes incur excessive training latency and a large volume of data transmission, we propose an innovative PSL framework, namely efficient parallel split learning (EPSL), to accelerate model training. Specifically, EPSL parallelizes client-side model training and reduces the dimension of activations' gradients for backpropagation (BP) via last-layer gradient aggregation, leading to a significant reduction in server-side training and communication latency. Moreover, by considering the heterogeneous channel conditions and computing capabilities of edge devices, we jointly optimize subchannel allocation, power control, and cut layer selection to minimize the per-round latency. Simulation results show that the proposed EPSL framework significantly decreases the training latency needed to achieve a target accuracy compared with state-of-the-art benchmarks, and that the tailored resource management and layer split strategy reduces latency considerably compared with the counterpart without optimization.
Persistent Identifier: http://hdl.handle.net/10722/348434
ISSN: 1536-1233
2023 Impact Factor: 7.7
2023 SCImago Journal Rankings: 2.755
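
The abstract above outlines two ideas that lend themselves to a small illustration: processing all clients' smashed data on the server in parallel, and aggregating activation gradients so that a single shared gradient is fed back over the downlink. The PyTorch sketch below is a hypothetical, simplified rendering of that general idea (batched server-side forward pass, one backward pass, averaged cut-layer gradient). It is not the paper's EPSL algorithm, which aggregates gradients at the last layer and jointly optimizes subchannels, power, and the cut layer; all model layers, shapes, and data here are placeholder assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
num_clients, batch, cut_dim, num_classes = 4, 32, 128, 10  # placeholder sizes

# Server-side sub-model (layers after the cut layer), shared across clients.
server_model = nn.Sequential(
    nn.Linear(cut_dim, 64),
    nn.ReLU(),
    nn.Linear(64, num_classes),
)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for the cut-layer activations and labels uploaded by each client.
client_acts = [torch.randn(batch, cut_dim, requires_grad=True) for _ in range(num_clients)]
client_labels = [torch.randint(0, num_classes, (batch,)) for _ in range(num_clients)]

# Parallel server-side forward pass: batch all clients' activations together
# instead of processing them one client at a time.
acts = torch.stack(client_acts)                        # [K, B, cut_dim]
labels = torch.stack(client_labels)                    # [K, B]
logits = server_model(acts)                            # [K, B, num_classes]

# Single backward pass over the combined loss (rather than one pass per client).
loss = loss_fn(logits.flatten(0, 1), labels.flatten())
loss.backward()

# Average the cut-layer gradients across clients so that one shared gradient
# tensor can be broadcast over the downlink instead of K per-client tensors.
agg_grad = torch.stack([a.grad for a in client_acts]).mean(dim=0)   # [B, cut_dim]
print(loss.item(), agg_grad.shape)
```

In this toy setting, the averaging step is what shrinks the feedback traffic from K per-client gradient tensors to a single shared one; per the abstract, the paper's joint subchannel, power, and cut-layer optimization targets the remaining uplink and computation bottlenecks.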

 

DC Field: Value
dc.contributor.author: Lin, Zheng
dc.contributor.author: Zhu, Guangyu
dc.contributor.author: Deng, Yiqin
dc.contributor.author: Chen, Xianhao
dc.contributor.author: Gao, Yue
dc.contributor.author: Huang, Kaibin
dc.contributor.author: Fang, Yuguang
dc.date.accessioned: 2024-10-09T00:31:29Z
dc.date.available: 2024-10-09T00:31:29Z
dc.date.issued: 2024-01-26
dc.identifier.citation: IEEE Transactions on Mobile Computing, 2024, v. 23, n. 10, p. 9224-9239
dc.identifier.issn: 1536-1233
dc.identifier.uri: http://hdl.handle.net/10722/348434
dc.description.abstract: Increasingly deep neural networks hinder the democratization of privacy-enhancing distributed learning, such as federated learning (FL), to resource-constrained devices. To overcome this challenge, in this paper we advocate the integration of the edge computing paradigm and parallel split learning (PSL), allowing multiple edge devices to offload substantial training workloads to an edge server via layer-wise model split. Observing that existing PSL schemes incur excessive training latency and a large volume of data transmission, we propose an innovative PSL framework, namely efficient parallel split learning (EPSL), to accelerate model training. Specifically, EPSL parallelizes client-side model training and reduces the dimension of activations' gradients for backpropagation (BP) via last-layer gradient aggregation, leading to a significant reduction in server-side training and communication latency. Moreover, by considering the heterogeneous channel conditions and computing capabilities of edge devices, we jointly optimize subchannel allocation, power control, and cut layer selection to minimize the per-round latency. Simulation results show that the proposed EPSL framework significantly decreases the training latency needed to achieve a target accuracy compared with state-of-the-art benchmarks, and that the tailored resource management and layer split strategy reduces latency considerably compared with the counterpart without optimization.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Transactions on Mobile Computing
dc.subject: Computational modeling
dc.subject: Data models
dc.subject: Distributed learning
dc.subject: edge computing
dc.subject: edge intelligence
dc.subject: Internet of Things
dc.subject: Optimization
dc.subject: Resource management
dc.subject: resource management
dc.subject: Servers
dc.subject: split learning
dc.subject: Training
dc.title: Efficient Parallel Split Learning over Resource-constrained Wireless Edge Networks
dc.type: Article
dc.identifier.doi: 10.1109/TMC.2024.3359040
dc.identifier.scopus: eid_2-s2.0-85183940068
dc.identifier.volume: 23
dc.identifier.issue: 10
dc.identifier.spage: 9224
dc.identifier.epage: 9239
dc.identifier.eissn: 1558-0660
dc.identifier.issnl: 1536-1233
