Article: Integrating Data Collection, Communication, and Computation for Importance-Aware Online Edge Learning Tasks

Title: Integrating Data Collection, Communication, and Computation for Importance-Aware Online Edge Learning Tasks
Authors: Wang, Nan; Teng, Yinglei; Huang, Kaibin
Keywords: data integration; hierarchical reinforcement learning; online learning; queue stability; two-timescale stochastic optimization
Issue Date: 1-Jan-2025
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Wireless Communications, 2025, v. 24, n. 3, p. 2606-2619
Abstract:

With the prevalence of real-time intelligence applications, online edge learning (OEL) has gained increasing attention owing to its ability to rapidly access environmental data and improve artificial intelligence models through edge computing. However, the performance of OEL is intricately tied to the dynamic nature of incoming data in ever-changing environments, which does not conform to a stationary distribution. In this work, we develop a data importance-aware collection, communication, and computation integration framework that boosts training efficiency by leveraging the varying usefulness of data under dynamic network resources. A model convergence metric (MCM) is first derived to quantify data importance in mini-batch gradient descent (MGD)-based online learning tasks. To expedite model learning at the edge, we optimize the training batch configuration and fine-tune the acquisition of important data through coordinated scheduling, encompassing data sampling, transmission, and computational resource allocation. To cope with the time discrepancy and complex coupling of the decision variables, we design a two-timescale hierarchical reinforcement learning (TTHRL) algorithm that decomposes the original problem into two layers of subproblems and optimizes them separately on mixed timescales. Experiments show that the proposed data integration framework effectively improves online learning efficiency while stabilizing the caching queues in the system.
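The core idea of importance-aware MGD can be illustrated with a minimal sketch: samples arrive as a stream, are held in a bounded cache queue, and each mini-batch is drawn from the currently most important cached samples. This is only an illustration under simplifying assumptions — the model is a toy scalar linear regressor, and the squared-error importance proxy below is a hypothetical stand-in for the paper's model convergence metric (MCM), not the metric itself.

```python
import random

def sample_importance(w, b, x, y):
    # Importance proxy: squared prediction error of the current model.
    # (A hypothetical stand-in for the paper's MCM, used only to
    # illustrate ranking cached samples by usefulness.)
    return (w * x + b - y) ** 2

def importance_aware_mgd(stream, cache_size=64, batch_size=8, lr=0.05):
    """Online MGD on a scalar linear model y ~ w*x + b, drawing each
    mini-batch from the most important samples in a bounded cache."""
    w, b = 0.0, 0.0
    cache = []  # bounded cache queue of (x, y) samples
    for x, y in stream:
        cache.append((x, y))
        if len(cache) > cache_size:
            cache.pop(0)  # evict oldest sample to keep the queue bounded
        if len(cache) < batch_size:
            continue
        # Rank cached samples by current importance; train on the top-k.
        batch = sorted(cache,
                       key=lambda s: -sample_importance(w, b, *s))[:batch_size]
        # Mini-batch gradient of mean squared error w.r.t. (w, b).
        gw = sum(2 * (w * xi + b - yi) * xi for xi, yi in batch) / batch_size
        gb = sum(2 * (w * xi + b - yi) for xi, yi in batch) / batch_size
        w -= lr * gw
        b -= lr * gb
    return w, b

# Synthetic noisy stream drawn from y = 2x + 1.
random.seed(0)
stream = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.1))
          for x in (random.uniform(-1, 1) for _ in range(2000))]
w, b = importance_aware_mgd(stream)
```

In this sketch, recomputing importance against the current model each step is what makes the selection "online": a sample's usefulness changes as the model improves, mirroring the non-stationarity the framework is designed for.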


Persistent Identifier: http://hdl.handle.net/10722/361986
ISSN: 1536-1276
2023 Impact Factor: 8.9
2023 SCImago Journal Rankings: 5.371

DC Field: Value

dc.contributor.author: Wang, Nan
dc.contributor.author: Teng, Yinglei
dc.contributor.author: Huang, Kaibin
dc.date.accessioned: 2025-09-18T00:36:03Z
dc.date.available: 2025-09-18T00:36:03Z
dc.date.issued: 2025-01-01
dc.identifier.citation: IEEE Transactions on Wireless Communications, 2025, v. 24, n. 3, p. 2606-2619
dc.identifier.issn: 1536-1276
dc.identifier.uri: http://hdl.handle.net/10722/361986
dc.description.abstract: With the prevalence of real-time intelligence applications, online edge learning (OEL) has gained increasing attention owing to its ability to rapidly access environmental data and improve artificial intelligence models through edge computing. However, the performance of OEL is intricately tied to the dynamic nature of incoming data in ever-changing environments, which does not conform to a stationary distribution. In this work, we develop a data importance-aware collection, communication, and computation integration framework that boosts training efficiency by leveraging the varying usefulness of data under dynamic network resources. A model convergence metric (MCM) is first derived to quantify data importance in mini-batch gradient descent (MGD)-based online learning tasks. To expedite model learning at the edge, we optimize the training batch configuration and fine-tune the acquisition of important data through coordinated scheduling, encompassing data sampling, transmission, and computational resource allocation. To cope with the time discrepancy and complex coupling of the decision variables, we design a two-timescale hierarchical reinforcement learning (TTHRL) algorithm that decomposes the original problem into two layers of subproblems and optimizes them separately on mixed timescales. Experiments show that the proposed data integration framework effectively improves online learning efficiency while stabilizing the caching queues in the system.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Transactions on Wireless Communications
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: data integration
dc.subject: hierarchical reinforcement learning
dc.subject: online learning
dc.subject: queue stability
dc.subject: two-timescale stochastic optimization
dc.title: Integrating Data Collection, Communication, and Computation for Importance-Aware Online Edge Learning Tasks
dc.type: Article
dc.identifier.doi: 10.1109/TWC.2024.3522956
dc.identifier.scopus: eid_2-s2.0-85216882657
dc.identifier.volume: 24
dc.identifier.issue: 3
dc.identifier.spage: 2606
dc.identifier.epage: 2619
dc.identifier.eissn: 1558-2248
dc.identifier.issnl: 1536-1276
