Fulltext links (may require subscription):
- Publisher Website (DOI): 10.1109/TWC.2024.3522956
- Scopus: eid_2-s2.0-85216882657
Article: Integrating Data Collection, Communication, and Computation for Importance-Aware Online Edge Learning Tasks
| Title | Integrating Data Collection, Communication, and Computation for Importance-Aware Online Edge Learning Tasks |
|---|---|
| Authors | Wang, Nan; Teng, Yinglei; Huang, Kaibin |
| Keywords | data integration; hierarchical reinforcement learning; online learning; queue stability; two-timescale stochastic optimization |
| Issue Date | 1-Jan-2025 |
| Publisher | Institute of Electrical and Electronics Engineers |
| Citation | IEEE Transactions on Wireless Communications, 2025, v. 24, n. 3, p. 2606-2619 |
| Abstract | With the prevalence of real-time intelligence applications, online edge learning (OEL) has gained increasing attention owing to its ability to rapidly access environmental data and improve artificial intelligence models via edge computing. However, the performance of OEL is intricately tied to the dynamic nature of incoming data in ever-changing environments, which does not conform to a stationary distribution. In this work, we develop a data importance-aware collection, communication, and computation integration framework that boosts training efficiency by leveraging the varying usefulness of data under dynamic network resources. A model convergence metric (MCM) is first derived that quantifies data importance in mini-batch gradient descent (MGD)-based online learning tasks. To expedite model learning at the edge, we optimize the training batch configuration and fine-tune the acquisition of important data through coordinated scheduling, encompassing data sampling, transmission, and computational resource allocation. To cope with the time discrepancy and complex coupling of the decision variables, we design a two-timescale hierarchical reinforcement learning (TTHRL) algorithm that decomposes the original problem into two layers of subproblems and optimizes them separately in a mixed-timescale pattern. Experiments show that the proposed data integration framework effectively improves online learning efficiency while stabilizing the caching queues in the system. |
| Persistent Identifier | http://hdl.handle.net/10722/361986 |
| ISSN | 1536-1276 (2023 Impact Factor: 8.9; 2023 SCImago Journal Rankings: 5.371) |
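The abstract describes ranking incoming samples by a convergence-based importance metric and biasing mini-batch gradient descent toward the most useful data. A minimal sketch of that idea, using per-sample loss as a stand-in importance score (the paper's actual MCM is derived from its convergence analysis and is not reproduced here; the toy model, scores, and hyperparameters below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task y = 2x + noise; the model is a single weight w.
X = rng.normal(size=100)
y = 2.0 * X + rng.normal(scale=0.1, size=100)
w = 0.0

def importance_scores(w, X, y):
    # Per-sample squared error as a proxy for a sample's contribution
    # to the gradient update (stand-in for the paper's MCM).
    residuals = w * X - y
    return residuals ** 2

def sample_batch(scores, batch_size, rng):
    # Draw a mini-batch with probability proportional to importance,
    # so high-loss samples are preferentially selected.
    p = scores / scores.sum()
    return rng.choice(len(scores), size=batch_size, replace=False, p=p)

# Importance-aware mini-batch gradient descent.
for _ in range(200):
    idx = sample_batch(importance_scores(w, X, y), batch_size=8, rng=rng)
    grad = np.mean(2 * (w * X[idx] - y[idx]) * X[idx])
    w -= 0.1 * grad
```

Biasing the batch toward high-loss samples concentrates updates where the model is worst, which is the intuition behind importance-aware data acquisition; the paper additionally couples this selection with sampling, transmission, and compute scheduling.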
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Wang, Nan | - |
| dc.contributor.author | Teng, Yinglei | - |
| dc.contributor.author | Huang, Kaibin | - |
| dc.date.accessioned | 2025-09-18T00:36:03Z | - |
| dc.date.available | 2025-09-18T00:36:03Z | - |
| dc.date.issued | 2025-01-01 | - |
| dc.identifier.citation | IEEE Transactions on Wireless Communications, 2025, v. 24, n. 3, p. 2606-2619 | - |
| dc.identifier.issn | 1536-1276 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/361986 | - |
| dc.description.abstract | <p>With the prevalence of real-time intelligence applications, online edge learning (OEL) has gained increasing attention owing to its ability to rapidly access environmental data and improve artificial intelligence models via edge computing. However, the performance of OEL is intricately tied to the dynamic nature of incoming data in ever-changing environments, which does not conform to a stationary distribution. In this work, we develop a data importance-aware collection, communication, and computation integration framework that boosts training efficiency by leveraging the varying usefulness of data under dynamic network resources. A model convergence metric (MCM) is first derived that quantifies data importance in mini-batch gradient descent (MGD)-based online learning tasks. To expedite model learning at the edge, we optimize the training batch configuration and fine-tune the acquisition of important data through coordinated scheduling, encompassing data sampling, transmission, and computational resource allocation. To cope with the time discrepancy and complex coupling of the decision variables, we design a two-timescale hierarchical reinforcement learning (TTHRL) algorithm that decomposes the original problem into two layers of subproblems and optimizes them separately in a mixed-timescale pattern. Experiments show that the proposed data integration framework effectively improves online learning efficiency while stabilizing the caching queues in the system.</p> | - |
| dc.language | eng | - |
| dc.publisher | Institute of Electrical and Electronics Engineers | - |
| dc.relation.ispartof | IEEE Transactions on Wireless Communications | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | data integration | - |
| dc.subject | hierarchical reinforcement learning | - |
| dc.subject | online learning | - |
| dc.subject | queue stability | - |
| dc.subject | two-timescale stochastic optimization | - |
| dc.title | Integrating Data Collection, Communication, and Computation for Importance-Aware Online Edge Learning Tasks | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1109/TWC.2024.3522956 | - |
| dc.identifier.scopus | eid_2-s2.0-85216882657 | - |
| dc.identifier.volume | 24 | - |
| dc.identifier.issue | 3 | - |
| dc.identifier.spage | 2606 | - |
| dc.identifier.epage | 2619 | - |
| dc.identifier.eissn | 1558-2248 | - |
| dc.identifier.issnl | 1536-1276 | - |
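The TTHRL algorithm named in the record above operates on two timescales: a slow outer decision (e.g., the training batch configuration) held fixed over a frame of slots, and fast inner decisions (e.g., per-slot sampling and resource allocation) updated every slot, with the caching queue kept stable. A minimal structural sketch of such a loop, with purely illustrative hand-coded decision rules in place of the paper's learned policies:

```python
import random

random.seed(0)

SLOTS_PER_FRAME = 10  # fast slots per slow frame

def outer_decision(queue_len):
    # Slow timescale: pick a larger training batch when the cache
    # queue backs up (illustrative rule, not the paper's policy).
    return 16 if queue_len > 50 else 8

def inner_decision(queue_len):
    # Fast timescale: serve more samples when the queue grows,
    # up to an illustrative per-slot service capacity of 12.
    return min(queue_len, 12)

queue = 0
for frame in range(20):
    batch_size = outer_decision(queue)       # updated once per frame
    for slot in range(SLOTS_PER_FRAME):
        arrivals = random.randint(0, 10)     # newly sensed samples
        served = inner_decision(queue)       # updated every slot
        queue = max(0, queue + arrivals - served)
    # batch_size would configure the next training round here
```

Holding the outer decision fixed across a frame while the inner decisions track per-slot randomness is the structural point of a two-timescale decomposition; it decouples the slowly varying configuration variables from the fast stochastic resource dynamics.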
