Article: Deep Learning for Latent Events Forecasting in Content Caching Networks

Title: Deep Learning for Latent Events Forecasting in Content Caching Networks
Authors: Yang, Zhong; Liu, Yuanwei; Chen, Yue; Zhou, Joey Tianyi
Keywords: edge computing
Machine learning (ML)
neural networks
supervised learning
Issue Date: 2022
Citation: IEEE Transactions on Wireless Communications, 2022, v. 21, n. 1, p. 413-428
Abstract: A novel Twitter context aided content caching (TAC) framework is proposed to enhance caching efficiency by taking advantage of the legibility and massive volume of Twitter data. To further promote caching efficiency, three machine learning models are proposed to predict latent events and event popularity, utilizing collected Twitter data with geo-tags and the geographic information of the adjacent base stations (BSs). Firstly, we propose a latent Dirichlet allocation (LDA) model for latent event forecasting, owing to the superiority of the LDA model in natural language processing (NLP). Then, we conceive a long short-term memory (LSTM) network with a skip-gram embedding approach, and an LSTM with a continuous skip-gram Geo-aware embedding approach, for event popularity forecasting. Furthermore, we associate the predicted latent events and their popularity with the caching strategy. Lastly, we propose a non-orthogonal multiple access (NOMA) based content transmission scheme. Extensive practical experiments demonstrate that: 1) the proposed TAC framework outperforms conventional caching frameworks and can be employed in practical applications thanks to its ability to associate caching with public interests; 2) the proposed LDA approach retains its superiority for NLP on Twitter data; 3) the perplexity of the proposed skip-gram-based LSTM is lower than that of the conventional LDA approach; and 4) the tweet hit rates of the model vary from 50% to 65%, and the hit rate of the cached contents reaches approximately 75% with a smaller caching space than conventional algorithms. Simulation results also show that the proposed NOMA-enabled caching scheme outperforms the conventional least-frequently-used (LFU) scheme by 25%.
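The abstract benchmarks the proposed caching scheme against the conventional least-frequently-used (LFU) policy. As a point of reference only, here is a minimal sketch of a generic LFU cache; the `LFUCache` class name and its insertion-order tie-breaking rule are illustrative assumptions, not the paper's implementation:

```python
from collections import defaultdict

class LFUCache:
    """Minimal least-frequently-used cache: on overflow, evict the
    item with the lowest access count (ties broken by insertion order)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}               # key -> cached content
        self.freq = defaultdict(int)  # key -> access count

    def get(self, key):
        if key not in self.store:
            return None               # cache miss
        self.freq[key] += 1
        return self.store[key]

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            # Evict the least-frequently-accessed key.
            victim = min(self.store, key=lambda k: self.freq[k])
            del self.store[victim]
            del self.freq[victim]
        self.store[key] = value
        self.freq[key] += 1
```

Because LFU ranks contents purely by past access counts, it cannot anticipate newly emerging events — which is the gap the paper's event-forecasting models aim to close.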
Persistent Identifier: http://hdl.handle.net/10722/349583
ISSN: 1536-1276
2023 Impact Factor: 8.9
2023 SCImago Journal Rankings: 5.371


DC Field: Value
dc.contributor.author: Yang, Zhong
dc.contributor.author: Liu, Yuanwei
dc.contributor.author: Chen, Yue
dc.contributor.author: Zhou, Joey Tianyi
dc.date.accessioned: 2024-10-17T06:59:30Z
dc.date.available: 2024-10-17T06:59:30Z
dc.date.issued: 2022
dc.identifier.citation: IEEE Transactions on Wireless Communications, 2022, v. 21, n. 1, p. 413-428
dc.identifier.issn: 1536-1276
dc.identifier.uri: http://hdl.handle.net/10722/349583
dc.description.abstract: A novel Twitter context aided content caching (TAC) framework is proposed to enhance caching efficiency by taking advantage of the legibility and massive volume of Twitter data. To further promote caching efficiency, three machine learning models are proposed to predict latent events and event popularity, utilizing collected Twitter data with geo-tags and the geographic information of the adjacent base stations (BSs). Firstly, we propose a latent Dirichlet allocation (LDA) model for latent event forecasting, owing to the superiority of the LDA model in natural language processing (NLP). Then, we conceive a long short-term memory (LSTM) network with a skip-gram embedding approach, and an LSTM with a continuous skip-gram Geo-aware embedding approach, for event popularity forecasting. Furthermore, we associate the predicted latent events and their popularity with the caching strategy. Lastly, we propose a non-orthogonal multiple access (NOMA) based content transmission scheme. Extensive practical experiments demonstrate that: 1) the proposed TAC framework outperforms conventional caching frameworks and can be employed in practical applications thanks to its ability to associate caching with public interests; 2) the proposed LDA approach retains its superiority for NLP on Twitter data; 3) the perplexity of the proposed skip-gram-based LSTM is lower than that of the conventional LDA approach; and 4) the tweet hit rates of the model vary from 50% to 65%, and the hit rate of the cached contents reaches approximately 75% with a smaller caching space than conventional algorithms. Simulation results also show that the proposed NOMA-enabled caching scheme outperforms the conventional least-frequently-used (LFU) scheme by 25%.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Wireless Communications
dc.subject: edge computing
dc.subject: Machine learning (ML)
dc.subject: neural networks
dc.subject: supervised learning
dc.title: Deep Learning for Latent Events Forecasting in Content Caching Networks
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TWC.2021.3096747
dc.identifier.scopus: eid_2-s2.0-85111599307
dc.identifier.volume: 21
dc.identifier.issue: 1
dc.identifier.spage: 413
dc.identifier.epage: 428
dc.identifier.eissn: 1558-2248
