Article: Hierarchical Split Federated Learning: Convergence Analysis and System Optimization

Title: Hierarchical Split Federated Learning: Convergence Analysis and System Optimization
Authors: Lin, Zheng; Wei, Wei; Chen, Zhe; Lam, Chan Tong; Chen, Xianhao; Gao, Yue; Luo, Jun
Keywords: Distributed learning; edge computing; hierarchical split federated learning; model aggregation; model splitting
Issue Date: 1-Oct-2025
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Mobile Computing, 2025, v. 24, n. 10, p. 9352-9367
Abstract: As AI models expand in size, it has become increasingly challenging to deploy federated learning (FL) on resource-constrained edge devices. To tackle this issue, split federated learning (SFL) has emerged as an FL framework with reduced workload on edge devices via model splitting; it has received extensive attention from the research community in recent years. Nevertheless, most prior works on SFL focus only on a two-tier architecture without harnessing multi-tier cloud-edge computing resources. In this paper, we intend to analyze and optimize the learning performance of SFL under multi-tier systems. Specifically, we propose the hierarchical SFL (HSFL) framework and derive its convergence bound. Based on the theoretical results, we formulate a joint optimization problem for model splitting (MS) and model aggregation (MA). To solve this rather hard problem, we then decompose it into MS and MA sub-problems that can be solved via an iterative descending algorithm. Simulation results demonstrate that the tailored algorithm can effectively optimize MS and MA in multi-tier systems and significantly outperform existing schemes.
Persistent Identifier: http://hdl.handle.net/10722/366994
ISSN: 1536-1233
2023 Impact Factor: 7.7
2023 SCImago Journal Rankings: 2.755
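
As a reading aid for the abstract above: the optimization it describes alternates between two sub-problems, solving model splitting (MS) with the model aggregation (MA) decision fixed and then MA with MS fixed, descending a joint objective until it stops improving. The sketch below is a minimal illustration of that alternating pattern under assumed inputs; the surrogate cost, the candidate cut layers and aggregation intervals, and all function names are hypothetical placeholders, not the formulation or algorithm from the paper.

"""Illustrative sketch only: HSFL (per the abstract) decomposes a joint model
splitting (MS) / model aggregation (MA) problem into two sub-problems solved by
an iterative descending algorithm. The cost model and search spaces below are
arbitrary placeholders, not the paper's formulation."""

from itertools import product


def surrogate_cost(cut_layer, edge_interval, cloud_interval):
    """Toy proxy for 'training cost plus convergence error' (coefficients are arbitrary)."""
    device_compute = 0.8 * cut_layer                      # deeper cut -> more on-device compute
    communication = 4.0 / cut_layer + 1.5 / edge_interval + 3.0 / cloud_interval
    convergence_gap = 0.3 * edge_interval + 0.6 * cloud_interval  # rarer aggregation -> looser bound
    return device_compute + communication + convergence_gap


def optimize_ms(ma, cut_layers):
    """MS sub-problem: best cut layer for a fixed MA decision."""
    edge_interval, cloud_interval = ma
    return min(cut_layers, key=lambda c: surrogate_cost(c, edge_interval, cloud_interval))


def optimize_ma(cut_layer, edge_intervals, cloud_intervals):
    """MA sub-problem: best (edge, cloud) aggregation intervals for a fixed cut layer."""
    return min(product(edge_intervals, cloud_intervals),
               key=lambda ma: surrogate_cost(cut_layer, ma[0], ma[1]))


def iterative_descent(cut_layers, edge_intervals, cloud_intervals, max_rounds=20):
    """Alternate between the MS and MA sub-problems until the cost stops decreasing."""
    ms, ma = cut_layers[0], (edge_intervals[0], cloud_intervals[0])
    best = surrogate_cost(ms, *ma)
    for _ in range(max_rounds):
        ms = optimize_ms(ma, cut_layers)
        ma = optimize_ma(ms, edge_intervals, cloud_intervals)
        cost = surrogate_cost(ms, *ma)
        if cost >= best - 1e-9:   # no further descent
            break
        best = cost
    return ms, ma, best


if __name__ == "__main__":
    ms, ma, cost = iterative_descent(cut_layers=[1, 2, 3, 4, 5, 6],
                                     edge_intervals=[1, 2, 4, 8],
                                     cloud_intervals=[2, 4, 8, 16])
    print(f"cut layer={ms}, (edge, cloud) aggregation intervals={ma}, cost={cost:.3f}")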

 

DC Field                  Value
dc.contributor.author     Lin, Zheng
dc.contributor.author     Wei, Wei
dc.contributor.author     Chen, Zhe
dc.contributor.author     Lam, Chan Tong
dc.contributor.author     Chen, Xianhao
dc.contributor.author     Gao, Yue
dc.contributor.author     Luo, Jun
dc.date.accessioned       2025-11-29T00:35:47Z
dc.date.available         2025-11-29T00:35:47Z
dc.date.issued            2025-10-01
dc.identifier.citation    IEEE Transactions on Mobile Computing, 2025, v. 24, n. 10, p. 9352-9367
dc.identifier.issn        1536-1233
dc.identifier.uri         http://hdl.handle.net/10722/366994
dc.description.abstract   As AI models expand in size, it has become increasingly challenging to deploy federated learning (FL) on resource-constrained edge devices. To tackle this issue, split federated learning (SFL) has emerged as an FL framework with reduced workload on edge devices via model splitting; it has received extensive attention from the research community in recent years. Nevertheless, most prior works on SFL focus only on a two-tier architecture without harnessing multi-tier cloud-edge computing resources. In this paper, we intend to analyze and optimize the learning performance of SFL under multi-tier systems. Specifically, we propose the hierarchical SFL (HSFL) framework and derive its convergence bound. Based on the theoretical results, we formulate a joint optimization problem for model splitting (MS) and model aggregation (MA). To solve this rather hard problem, we then decompose it into MS and MA sub-problems that can be solved via an iterative descending algorithm. Simulation results demonstrate that the tailored algorithm can effectively optimize MS and MA in multi-tier systems and significantly outperform existing schemes.
dc.language               eng
dc.publisher              Institute of Electrical and Electronics Engineers
dc.relation.ispartof      IEEE Transactions on Mobile Computing
dc.rights                 This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject                Distributed learning
dc.subject                edge computing
dc.subject                hierarchical split federated learning
dc.subject                model aggregation
dc.subject                model splitting
dc.title                  Hierarchical Split Federated Learning: Convergence Analysis and System Optimization
dc.type                   Article
dc.identifier.doi         10.1109/TMC.2025.3565509
dc.identifier.scopus      eid_2-s2.0-105004400800
dc.identifier.volume      24
dc.identifier.issue       10
dc.identifier.spage       9352
dc.identifier.epage       9367
dc.identifier.eissn       1558-0660
dc.identifier.issnl       1536-1233
