Article: Significant Sampling for Shortest Path Routing: A Deep Reinforcement Learning Solution

Title: Significant Sampling for Shortest Path Routing: A Deep Reinforcement Learning Solution
Authors: Shao, Yulin; Rezaee, Arman; Liew, Soung Chang; Chan, Vincent W.S.
Keywords: deep reinforcement learning; network monitoring; shortest path routing; significant sampling
Issue Date: 2020
Citation: IEEE Journal on Selected Areas in Communications, 2020, v. 38, n. 10, p. 2234-2248
Abstract: Significant sampling is an adaptive monitoring technique proposed for highly dynamic networks with centralized network management and control systems. The essential idea of significant sampling is to collect and disseminate network state information when it is of significant value to the optimal operation of the network, in particular when it helps identify the shortest routes. Discovering the optimal sampling policy that specifies the optimal sampling frequency is referred to as the significant sampling problem. Modeling the problem as a Markov decision process, this paper puts forth a deep reinforcement learning (DRL) approach to the significant sampling problem. This approach is more flexible and general than prior approaches, as it can accommodate a diverse set of network environments. Experimental results show that: 1) following the objectives set in prior work, our DRL approach achieves performance comparable to the analytically derived policy $\phi'$; unlike the prior approach, ours is model-free and unaware of the underlying traffic model; 2) by appropriately modifying the objective functions, we obtain a new policy that addresses the never-sample problem of policy $\phi'$, consequently reducing the overall cost; 3) our DRL approach works well under different stochastic variations of the network environment, providing good solutions in complex network environments where analytically tractable solutions are not feasible.
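The abstract describes modeling the sampling-frequency decision as a Markov decision process solved with DRL. As a loose illustration of the underlying trade-off only (not the paper's actual state, action, or cost formulation), the toy sketch below uses single-state tabular Q-learning to pick a sampling interval; `INTERVALS`, `SAMPLE_COST`, and `STALENESS_RATE` are hypothetical placeholders, not values from the paper.

```python
import random

# Toy sketch: an agent learns how often to sample network state, trading a
# fixed per-sample cost against a staleness penalty that grows with the time
# elapsed since the last sample. All constants are illustrative assumptions.

random.seed(0)

INTERVALS = [1, 2, 4, 8]   # candidate sampling intervals (the actions)
SAMPLE_COST = 1.0          # hypothetical cost paid each time we sample
STALENESS_RATE = 0.3       # hypothetical penalty per unit of staleness

def step_cost(interval):
    # Cost of waiting `interval` steps and then sampling once:
    # staleness accrues linearly between samples, summing to an
    # arithmetic series, then one sampling cost is paid.
    staleness = STALENESS_RATE * interval * (interval - 1) / 2
    return SAMPLE_COST + staleness

def train(episodes=2000, eps=0.1, alpha=0.1):
    # Single-state Q-table: one running per-time-step cost estimate per action.
    q = {a: 0.0 for a in INTERVALS}
    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit the lowest-cost interval so far.
        a = random.choice(INTERVALS) if random.random() < eps else min(q, key=q.get)
        cost = step_cost(a) / a            # normalize to cost per time step
        q[a] += alpha * (cost - q[a])      # incremental running average
    return min(q, key=q.get)               # interval with lowest estimated cost

best = train()
```

With these placeholder constants the per-step cost is lowest at an intermediate interval: sampling every step pays the sample cost too often, while sampling rarely lets staleness dominate, which is the same tension the significant sampling problem formalizes.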
Persistent Identifier: http://hdl.handle.net/10722/363361
ISSN: 0733-8716 (print); 1558-0008 (electronic)
2023 Impact Factor: 13.8
2023 SCImago Journal Rankings: 8.707

DC Field: Value
dc.contributor.author: Shao, Yulin
dc.contributor.author: Rezaee, Arman
dc.contributor.author: Liew, Soung Chang
dc.contributor.author: Chan, Vincent W.S.
dc.date.accessioned: 2025-10-10T07:46:16Z
dc.date.available: 2025-10-10T07:46:16Z
dc.date.issued: 2020
dc.identifier.citation: IEEE Journal on Selected Areas in Communications, 2020, v. 38, n. 10, p. 2234-2248
dc.identifier.issn: 0733-8716
dc.identifier.uri: http://hdl.handle.net/10722/363361
dc.description.abstract: Significant sampling is an adaptive monitoring technique proposed for highly dynamic networks with centralized network management and control systems. The essential idea of significant sampling is to collect and disseminate network state information when it is of significant value to the optimal operation of the network, in particular when it helps identify the shortest routes. Discovering the optimal sampling policy that specifies the optimal sampling frequency is referred to as the significant sampling problem. Modeling the problem as a Markov decision process, this paper puts forth a deep reinforcement learning (DRL) approach to the significant sampling problem. This approach is more flexible and general than prior approaches, as it can accommodate a diverse set of network environments. Experimental results show that: 1) following the objectives set in prior work, our DRL approach achieves performance comparable to the analytically derived policy $\phi'$; unlike the prior approach, ours is model-free and unaware of the underlying traffic model; 2) by appropriately modifying the objective functions, we obtain a new policy that addresses the never-sample problem of policy $\phi'$, consequently reducing the overall cost; 3) our DRL approach works well under different stochastic variations of the network environment, providing good solutions in complex network environments where analytically tractable solutions are not feasible.
dc.language: eng
dc.relation.ispartof: IEEE Journal on Selected Areas in Communications
dc.subject: deep reinforcement learning
dc.subject: Network monitoring
dc.subject: shortest path routing
dc.subject: significant sampling
dc.title: Significant Sampling for Shortest Path Routing: A Deep Reinforcement Learning Solution
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/JSAC.2020.3000364
dc.identifier.scopus: eid_2-s2.0-85086737988
dc.identifier.volume: 38
dc.identifier.issue: 10
dc.identifier.spage: 2234
dc.identifier.epage: 2248
dc.identifier.eissn: 1558-0008
