As users' access to the network has evolved into the acquisition of mass content instead of IP addresses, the IP network architecture based on end-to-end communication cannot meet users' needs, and Information-Centric Networking (ICN) came into being. From a technical point of view, ICN is a promising future network architecture, and designing a reasonable pricing mechanism plays a positive role in promoting its deployment. Current research on ICN pricing mechanisms focuses on paid content. We therefore study an ICN pricing model for free content, analyzed with game theory based on the Nash equilibrium. In this work, advertisers are considered, and an advertiser model is established to describe the economic interaction between advertisers and ICN entities. The solution yields the best pricing strategy for each ICN entity and maximizes its benefit. Our extensive analysis and numerical results show that the proposed pricing framework significantly outperforms existing solutions for free content.
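The Nash-equilibrium reasoning can be illustrated with a toy two-player pricing game (not the paper's actual advertiser model; the demand parameters `a`, `b`, `c` are illustrative assumptions): each entity faces a linear demand that falls with its own price and rises with the other's, and iterated best responses converge to the equilibrium prices.

```python
def best_response(p_other, a=10.0, b=2.0, c=0.5):
    # Linear demand d_i = a - b*p_i + c*p_other.
    # Maximizing revenue p_i * d_i gives p_i = (a + c*p_other) / (2b).
    return (a + c * p_other) / (2 * b)

def nash_prices(a=10.0, b=2.0, c=0.5, iters=100):
    """Iterate simultaneous best responses until the prices settle
    at the (unique) Nash equilibrium p* = a / (2b - c)."""
    p1 = p2 = 0.0
    for _ in range(iters):
        p1, p2 = best_response(p2, a, b, c), best_response(p1, a, b, c)
    return p1, p2
```

Because the best-response map is a contraction (slope c/2b < 1), the iteration converges regardless of the starting prices; at the fixed point neither player can gain by deviating, which is the equilibrium notion the abstract invokes.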
The emergence of various new services has posed a huge challenge to the existing network architecture. To reduce network delay and backhaul pressure, caching popular contents at the edge of the network has been considered a feasible scheme. However, how to efficiently utilize the limited caching resources to cache diverse contents has proved a tough problem over the past decade. In this paper, considering time-varying user requests and heterogeneous content sizes, a user-preference-aware hierarchical cooperative caching strategy for an edge-user caching architecture is proposed. The strategy is divided into three phases: content placement, content delivery, and content update. In the content placement phase, a cooperative placement algorithm driven by local content popularity caches contents proactively. In the content delivery phase, a cooperative delivery algorithm delivers the cached contents. In the content update phase, an update algorithm refreshes the cache according to content popularity. Finally, the proposed strategy is validated on the MovieLens dataset, and the results reveal that it improves delay performance by at least 35.3% compared with three benchmark strategies.
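A minimal sketch of popularity-driven proactive placement (a single-node stand-in for the cooperative algorithm; the content names and numbers are hypothetical): with heterogeneous content sizes, contents can be ranked by local popularity per unit size and cached greedily until capacity runs out.

```python
def place_contents(popularity, sizes, capacity):
    """Greedy proactive placement: rank contents by popularity per
    unit of size and cache them until the capacity is exhausted."""
    order = sorted(popularity,
                   key=lambda c: popularity[c] / sizes[c],
                   reverse=True)
    cached, used = [], 0
    for c in order:
        if used + sizes[c] <= capacity:
            cached.append(c)
            used += sizes[c]
    return cached
```

Dividing popularity by size matters when sizes differ: a moderately popular small item can be a better use of cache space than a very popular large one, which is exactly the heterogeneity the abstract highlights.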
One of the challenges of Information-Centric Networking (ICN) is finding the optimal location for caching content and processing users' requests. In this paper, we address this challenge by leveraging Software-Defined Networking (SDN) for efficient ICN management. To achieve this, we formulate the problem as a mixed-integer nonlinear programming (MINLP) model incorporating caching, routing, and load-balancing decisions. We explore two distinct scenarios. First, we solve the problem in an offline mode using the GAMS environment, assuming a stable network state, to demonstrate the superior performance of the cache-enabled network over non-cache networks. Subsequently, we investigate the problem in an online mode where the network state changes dynamically over time. Given the computational complexity of MINLP, we propose the software-defined caching, routing, and load balancing (SDCRL) algorithm as an efficient and scalable solution. Our evaluation demonstrates that SDCRL significantly reduces computation time while producing results that closely match those of GAMS.
Emerging mobile edge computing (MEC) is considered a feasible solution for offloading computation-intensive request tasks generated by mobile wireless equipment (MWE) with limited computational resources and energy. Because the request tasks from one MWE are homogeneous over a long-term period, it is vital to pre-deploy the particular service caching required by those tasks at the MEC server. In this paper, we model a service-caching-assisted MEC framework that takes into account the constraint on the number of service cachings hosted by each edge server and the migration of request tasks from the current edge server to another that hosts the required service caching. Furthermore, we propose a multi-agent deep reinforcement learning-based computation offloading and task migration decision-making scheme (MBOMS) to minimize the long-term average weighted cost. MBOMS learns a near-optimal offloading and migration policy through centralized training and decentralized execution. Systematic and comprehensive simulation results reveal that MBOMS converges well after training and outperforms five baseline algorithms.
In this paper, we explore a distributed collaborative caching and computing model to support the distribution of adaptive-bit-rate video streaming. The aim is to reduce the average initial buffer delay and improve the quality of user experience. Considering the difference between global and local video popularity and the time-varying characteristics of popularity, a two-stage caching scheme is proposed to push popular videos closer to users and minimize the average initial buffer delay. Based on both long-term and short-term content popularity, the proposed solution is decoupled into a proactive cache stage and a cache update stage. In the proactive cache stage, we develop a proactive placement algorithm that can be executed in an off-peak period. In the cache update stage, we propose a reactive update algorithm that revises the existing cache policy to minimize buffer delay. Simulation results verify that the proposed caching algorithms reduce the initial buffer delay efficiently.
Mobile Edge Computing (MEC) is a technology designed for the on-demand provisioning of computing and storage services, strategically positioned close to users. In the MEC environment, frequently accessed content can be deployed and cached on edge servers to optimize the efficiency of content delivery, ultimately enhancing the quality of the user experience. However, because edge devices and nodes are typically placed at the network's periphery, they face various fault-tolerance challenges, including network instability, device failures, and resource constraints. Given the dynamic nature of MEC, making high-quality caching decisions for real-time, latency-sensitive mobile applications by effectively utilizing mobility information remains a significant challenge. In response, this paper introduces FT-MAACC, a mobility-aware caching solution grounded in multi-agent deep reinforcement learning and equipped with fault-tolerance mechanisms. The approach integrates content adaptivity algorithms to evaluate the priority of highly user-adaptive cached content. Furthermore, it relies on collaborative caching strategies based on multi-agent deep reinforcement learning models and establishes a fault-tolerance model to ensure the system's reliability, availability, and persistence. Empirical results demonstrate that FT-MAACC outperforms its peer methods in cache hit rate and transmission latency.
Mobile Edge Computing (MEC) is a promising technology that provides on-demand computing and efficient storage services as close to end users as possible. In an MEC environment, servers are deployed closer to mobile terminals to exploit storage infrastructure, improve content delivery efficiency, and enhance user experience. However, due to the limited capacity of edge servers, it remains a significant challenge to meet users' changing, time-varying, and customized needs for highly diversified content. Recently, caching content at the edge has become a popular way to address these challenges: it fills the communication gap between users and content providers while relieving pressure on remote cloud servers. However, existing static caching strategies are inefficient at handling the time-varying popularity of content and meeting users' demands for highly diversified entity data. To address this challenge, we introduce PRIME, a novel method for content caching over MEC. It synthesizes a content popularity prediction model, which takes users' stay time and request traces as inputs, with a deep reinforcement learning model that yields dynamic caching schedules. Experimental results demonstrate that PRIME, tested on the MovieLens 1M dataset for user request patterns and the Shanghai Telecom dataset for user mobility, outperforms its peers in cache hit rate, transmission latency, and system cost.
As a viable component of the 6G wireless communication architecture, satellite-terrestrial networks support efficient file delivery by leveraging the innate broadcast ability of satellites and the powerful file transmission approaches of multi-tier terrestrial networks. In this paper, we introduce edge computing technology into the satellite-terrestrial network and propose a partition-based cache and delivery strategy to make full use of the integrated resources and reduce the backhaul load. Focusing on the interference from nodes at different geographical distances, we derive the successful file transmission probability of the typical user by utilizing tools from stochastic geometry. Considering the constraints of node cache space and file-set parameters, we obtain a near-optimal partition-based cache and delivery strategy by optimizing the asymptotic successful transmission probability of the typical user. The resulting nonlinear programming problem is settled by jointly utilizing the standard particle swarm optimization (PSO) method and a greedy multiple-knapsack choice problem (MKCP) optimization method. Numerical results show that, compared with the terrestrial-only cache strategy, the Ground Popular Strategy, the Satellite Popular Strategy, and the independent and identically distributed popularity strategy, the performance of the proposed scheme improves by 30.5%, 9.3%, 12.5%, and 13.7%, respectively.
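The greedy knapsack step can be sketched as follows (a simplified guess at how a greedy MKCP assignment might proceed, not the paper's exact procedure; file sizes, values, and capacities are made up): files ranked by value density are placed into the first cache node that still has room.

```python
def greedy_mkcp(files, capacities):
    """files: name -> (size, value); capacities: cache size per node.
    Greedily assign each file, in decreasing value-per-size order,
    to the first node with enough remaining space. Returns a map
    from file name to node index; unplaced files are omitted."""
    remaining = list(capacities)
    assignment = {}
    for name, (size, value) in sorted(files.items(),
                                      key=lambda kv: kv[1][1] / kv[1][0],
                                      reverse=True):
        for i, cap in enumerate(remaining):
            if size <= cap:
                assignment[name] = i
                remaining[i] -= size
                break
    return assignment
```

Greedy density ordering is the standard polynomial-time heuristic for multiple-knapsack variants; it trades optimality for speed, which is why the abstract pairs it with PSO rather than solving the nonlinear program exactly.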
In recent years, the exponential proliferation of smart devices and their intelligent applications has posed severe challenges to conventional cellular networks. These challenges can potentially be overcome by integrating communication, computing, caching, and control (i4C) technologies. In this survey, we first give a snapshot of different aspects of i4C, comprising background, motivation, leading technological enablers, potential applications, and use cases. Next, we describe different models of communication, computing, caching, and control (4C) to lay the foundation of the integration approach. We review current state-of-the-art research efforts related to i4C, focusing on recent trends in both conventional and artificial intelligence (AI)-based integration approaches, and highlight the need for intelligence in resource integration. We then discuss integrated sensing and communication (ISAC) and classify the integration approaches into various classes. Finally, we present open challenges and future research directions for beyond-5G networks, such as 6G.
The upsurge of mobile multimedia traffic puts a heavy burden on the cellular network, and wireless caching has emerged as a powerful technique to overcome the backhaul bottleneck and alleviate the network burden. However, most previous works ignored user mobility, thus neither reaping the caching gain it offers nor finding wide practical application. In this paper, a mobility-aware caching strategy for software-defined network (SDN)-based networks is studied. First, since a typical mobile user (MU) has multiple opportunities to connect with nearby MUs and small base stations (SBSs), the contact times between MUs, and between an MU and SBSs, are derived as Poisson and Gamma distributions, respectively. Second, we propose a two-tier cooperative caching strategy in which SBSs cache rateless fountain-coded video blocks probabilistically and non-repeatedly, while each MU stores the whole encoded video it received last. The corresponding four-stage transmission process is analyzed, where the key intermediate step is the derivation of the service and failure probabilities of each transmission manner. Finally, we derive the successful offloading rate and the average data offloading ratio (ADOR) as performance metrics. A system optimization problem based on the ADOR is formulated, and two solutions are proposed: a derivative-based solution (DB-Solution) and a long-tail distribution approximation (LTD-Approximation). Simulation results demonstrate that LTD-Approximation performs similarly to DB-Solution, and the proposed caching strategy achieves quasi-optimal performance compared with other contrast schemes.
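Modeling contacts as a Poisson process means inter-contact times are exponentially distributed, so the probability of meeting at least one SBS before a content deadline is 1 − e^(−λT). A short Monte-Carlo check (contact rate and deadline are assumed values, not taken from the paper) should reproduce that closed form:

```python
import random

def offload_success_prob(rate, deadline, trials=100_000, seed=1):
    """Monte-Carlo estimate of the probability that at least one
    contact occurs before the deadline, when inter-contact times
    are exponential with the given rate (Poisson contacts)."""
    rng = random.Random(seed)
    hits = sum(rng.expovariate(rate) <= deadline for _ in range(trials))
    return hits / trials
```

With rate 0.5 and deadline 2, the analytic answer is 1 − e^(−1) ≈ 0.632; agreement of the simulation with this value is the sanity check that the exponential (Poisson) contact model is wired up correctly.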
In Information-Centric Networking (ICN), where content is the object of exchange, in-network caching is a unique functional feature able to handle data storage and distribution in remote sensing satellite networks. Setting up cache space at any node enables users to access data nearby, relieving the processing pressure on servers. However, existing caching strategies still suffer from a lack of global planning of cache contents and low utilization of cache resources due to the lack of fine-grained division of cached content. To address these issues, a cooperative caching strategy (CSTL) for remote sensing satellite networks based on a two-layer caching model is proposed. The two-layer model is constructed by setting up separate cache spaces in the satellite network and at the ground station. Popular contents in the region are cached probabilistically at the ground station to reduce users' access delay. In the satellite network, a content classification method based on hierarchical division is proposed, and differential probabilistic caching is employed for different levels of content. The cached content is also dynamically adjusted by analyzing subsequent changes in its popularity. In the two-layer model, ground stations and satellite networks cache collaboratively to achieve global planning of cache contents, rationalize the utilization of cache resources, and reduce the propagation delay of remote sensing data. Simulation results show that the CSTL strategy not only achieves a high cache hit ratio compared with other caching strategies but also effectively reduces user request delay and server load, satisfying the timeliness requirement of remote sensing data transmission.
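Differential probabilistic caching by content level can be sketched as follows (the level thresholds and per-level probabilities are invented for illustration; the paper's hierarchical division is more elaborate): content is classified into popularity levels, and hotter levels are cached with higher probability.

```python
import random

LEVEL_PROB = {0: 0.9, 1: 0.5, 2: 0.1}  # assumed caching probability per level

def classify(popularity, thresholds=(100, 20)):
    """Map a popularity count to a level: 0 (hot), 1 (warm), 2 (cold)."""
    if popularity >= thresholds[0]:
        return 0
    if popularity >= thresholds[1]:
        return 1
    return 2

def should_cache(popularity, rng=random):
    """Cache the item with a probability tied to its level."""
    return rng.random() < LEVEL_PROB[classify(popularity)]
```

The point of the probabilistic (rather than deterministic) decision is diversity: neighboring nodes applying the same rule do not all cache identical hot items, which raises aggregate utilization of the limited cache space.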
In this paper, we reveal the fundamental limitation of network densification on the performance of a caching-enabled small cell network (CSCN) under two typical user association rules, namely content-based and distance-based rules. It turns out that immoderately caching content can significantly change the interference distribution in the CSCN, which may degrade the network area spectral efficiency (ASE). Meanwhile, it is shown that the content-based rule outperforms the distance-based rule in terms of network ASE only when small cell base stations (BSs) are sparsely deployed with low decoding thresholds. Moreover, it is proved that the network ASE under distance-based user association serves as an upper bound of that under the content-based rule in the dense-BS regime. To enable more spectrum-efficient user association in dense CSCNs, we further optimize network ASE by designing a probabilistic content retrieving strategy based on the distance-based rule. With the optimized retrieving probability, network ASE can be substantially enhanced and even increases with growing BS density in the dense-BS regime.
In coded caching, users cache pieces of files under a specific arrangement so that the server can satisfy their requests simultaneously in the broadcast scenario via an eXclusive OR (XOR) operation, thereby reducing the amount of transmitted data. However, when users' locations change, the uploading of caching information becomes so frequent and extensive that the traffic increase can outweigh the traffic reduction that traditional coded caching achieves. In this paper, we propose mobile coded caching schemes that reduce network traffic in mobility scenarios by lowering the cost of uploading caching information. In the cache placement phase, the proposed scheme first constructs caching patterns and then assigns them to users, either according to the graph coloring method and the four color theorem in our centralized placement algorithm, or randomly in our decentralized placement algorithm. Users are then divided into groups based on their caching patterns. As a benefit, when a user moves, only the type of caching pattern, rather than the whole record of which file pieces are cached, is uploaded. In the content delivery phase, the XOR coded caching messages are reconstructed. Transmission data volume is derived to measure the performance of the proposed schemes. Numerical results show that they achieve great improvement in traffic offloading.
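The XOR delivery idea behind all coded caching schemes can be shown with the textbook two-user example (illustrative byte strings, not the paper's mobile scheme): each user caches complementary halves of both files, and a single coded broadcast then serves two different requests at once.

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two files, each split into two equal pieces.
A1, A2 = b"AA", b"aa"   # file A
B1, B2 = b"BB", b"bb"   # file B

# Placement: user 1 caches {A1, B1}, user 2 caches {A2, B2}.
# Requests: user 1 wants file A, user 2 wants file B.
coded = xor(A2, B1)            # the single broadcast message

recovered_A2 = xor(coded, B1)  # user 1 cancels its cached B1 -> gets A2
recovered_B1 = xor(coded, A2)  # user 2 cancels its cached A2 -> gets B1
```

One broadcast of half a file replaces two unicast transmissions of half a file each, a 2x saving in this toy case; the schemes in the paper keep this delivery gain while shrinking the uplink signaling needed when mobile users' cached pieces change.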
With the surge of content-centric applications, it is challenging to balance network traffic and cater to low-delay requirements. A hierarchical caching architecture spanning both the edge network (EN) and the core network (CN) has emerged, leveraging caching resources to reduce content delivery delay. Most previous work takes the impractical assumption of treating the CN as a content provider, which neglects the collaboration of intermediate CN caches. Most importantly, it is still necessary to thoroughly study the tradeoff between CN delay and edge delay in file delivery so as to minimize the overall delivery delay across the network. In this paper, we consider a hierarchical caching network with distributed CN nodes and edge nodes, where cooperative transmission enables edge nodes to transmit multiple files simultaneously. This poses a joint optimization problem of hierarchical file caching and fetching to minimize the overall delivery delay of requests. Since the problem is NP-hard, we decompose it and design an iterative algorithm to address it. Numerical results validate that the proposed scheme finds a balanced solution between lowering edge delay by utilizing coordinated CN caching and lowering CN delay by relying solely on edge caching.
Cell-free wireless heterogeneous networks (HetNets) have emerged as a technological alternative to conventional cellular networks. In this paper, we study the spatially correlated caching strategy, the energy analysis, and the impact of the parameter β on the total energy cost of cell-free wireless HetNets whose access points (APs) are distributed according to a Beta Ginibre Point Process (β-GPP). We derive an approximate expression for the Successful Delivery Probability (SDP) based on the Signal-to-Interference-plus-Noise Ratio coverage model. Both analytical and simulation results show that the proposed caching model based on β-GPP placement, which jointly accounts for path loss, fading, and interference, closely captures the caching performance of cell-free HetNets in terms of SDP. By guaranteeing the outage probability constraints, an analytical expression for the uplink energy cost is also derived. Another conclusion is that with AP locations modeled by β-GPP, the power consumption is not sensitive to β but is sensitive to the dimension of the kernel function; hence β is less restrictive, and only the truncation of the Ginibre kernel has to be appropriately modified. These findings are new compared with the existing literature, where nodes in cell-free systems are commonly assumed to follow a Poisson Point Process, a Matern Hard-Core Process, or a Poisson Cluster Process.
To reduce cache defragmentation overhead and improve the cache hit rate under data-intensive workflows, a persistent distributed file system client cache, DFS-Cache (Distributed File System Cache), is proposed. DFS-Cache is designed and implemented on non-volatile memory (NVM); it guarantees data persistence and crash consistency while greatly reducing cold-start time. DFS-Cache comprises a cache defragmentation mechanism based on virtual-memory remapping and a cache space management strategy based on time-to-live (TTL). The former exploits the fact that NVM can be directly addressed by the memory controller, dynamically modifying the mapping between virtual and physical addresses to achieve zero-copy memory defragmentation. The latter is a hot/cold-separated group management strategy that, with the help of the remapping-based defragmentation mechanism, improves the management efficiency of the cache space. In experiments with real Intel Optane persistent memory devices, compared against the commercial distributed file systems MooseFS and GlusterFS under standard benchmarks such as Fio and Filebench, DFS-Cache improves system throughput by up to 5.73x and 1.89x, respectively.
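The TTL side of the design can be sketched as a minimal cache whose entries carry an expiry time and are lazily evicted on access (a deliberate simplification of DFS-Cache's hot/cold grouping; the class and its API are illustrative, not DFS-Cache's actual interface):

```python
import time

class TTLCache:
    """Minimal TTL cache: each entry stores an expiry timestamp;
    entries past their TTL are treated as cold and evicted lazily
    the next time they are looked up."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry)

    def put(self, key, value, now=None):
        now = time.monotonic() if now is None else now
        self.store[key] = (value, now + self.ttl)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        item = self.store.get(key)
        if item is None:
            return None
        value, expiry = item
        if now > expiry:
            del self.store[key]  # expired: evict the cold entry
            return None
        return value
```

Grouping entries by TTL (hot vs. cold), as DFS-Cache does, lets expiry scans touch whole groups instead of individual entries; the lazy per-key eviction above is the simplest correct baseline for that idea.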
Deploying task caching at edge servers has become an effective way to handle compute-intensive and latency-sensitive tasks on the industrial internet. However, how to select the task scheduling location to reduce task delay and cost while ensuring data security and reliable communication in edge computing remains a challenge. To solve this problem, this paper establishes a task scheduling model with joint blockchain and task caching in the industrial internet and designs a novel blockchain-assisted caching mechanism to enhance system security. The task scheduling problem, which couples the scheduling decision, the caching decision, and the blockchain reward, is formulated as a minimum weighted-cost problem under delay constraints. This is a mixed-integer nonlinear problem, which is proved to be nonconvex and NP-hard. To find the optimal solution, this paper proposes a task scheduling strategy algorithm based on an improved genetic algorithm (IGA-TSPA), improving the initialization and mutation operations of the genetic algorithm to shrink the initial solution space and accelerate convergence to the optimal solution. In addition, an Improved Least Frequently Used algorithm is proposed to improve the content hit rate. Simulation results show that IGA-TSPA finds optimal solutions faster and runs in less time than existing edge computing scheduling algorithms. The established task scheduling model not only saves 62.19% of system overhead compared with local computing but is also of great significance for protecting data security, reducing task processing delay, and reducing system cost.
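IGA-TSPA's customized initialization and mutation operators are not reproduced here; the following is a plain genetic-algorithm sketch for the same kind of assignment problem (all cost numbers and GA hyper-parameters are made up): each genome assigns every task a scheduling location, and fitness is the total cost of the assignment.

```python
import random

def ga_schedule(costs, pop_size=40, gens=60, mut_rate=0.1, seed=0):
    """costs[t][loc]: cost of running task t at location loc
    (e.g. 0 = local, 1 = edge, 2 = cloud). Evolve an assignment
    minimizing total cost; returns (best_assignment, best_cost)."""
    rng = random.Random(seed)
    n_tasks, n_locs = len(costs), len(costs[0])

    def fitness(ind):
        return sum(costs[t][loc] for t, loc in enumerate(ind))

    pop = [[rng.randrange(n_locs) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        next_pop = pop[:2]                    # elitism: keep the two best
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:10], 2)  # select among the fittest
            cut = rng.randrange(1, n_tasks) if n_tasks > 1 else 0
            child = p1[:cut] + p2[cut:]       # one-point crossover
            if rng.random() < mut_rate:
                child[rng.randrange(n_tasks)] = rng.randrange(n_locs)
            next_pop.append(child)
        pop = next_pop
    best = min(pop, key=fitness)
    return best, fitness(best)
```

The paper's improvements target exactly the weak spots visible here: random initialization wastes effort on infeasible or poor genomes, and unguided mutation slows convergence, which is why IGA-TSPA reworks both operators.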
An Information-Centric Network (ICN) provides a promising paradigm for the upcoming internet architecture, which will struggle with steady growth in data and changes in access models. Various ICN architectures have been designed, including Named Data Networking (NDN), which is built around content delivery instead of hosts. Because data is the central part of the network, NDN was developed to remove the dependency on IP addresses and provide content effectively. Mobility is one of the major research dimensions of this upcoming internet architecture. Some research has addressed mobility issues, but problems such as handover delay and packet loss during real-time video streaming remain in the cases of consumer and producer mobility. To solve this, an efficient hierarchical Cluster-Based Proactive Caching for Device Mobility Management (CB-PC-DMM) in NDN Vehicular Networks (NDN-VN) is proposed, through which the consumer receives contents proactively after a handover. When a consumer moves to the next destination, a handover interest is sent to the connected router, and the router multicasts the consumer's desired data packet to the next hop of neighboring routers. Thus, once the handover is completed, the consumer can easily obtain the content from the newly connected router. The proposed CB-PC-DMM improves the packet delivery ratio and reduces handover delay as well as cluster overhead. Moreover, the intra- and inter-domain handover handling procedures of CB-PC-DMM for NDN-VN are described. For validation, MATLAB simulations are conducted. The results show that the proposed scheme reduces handover delay and increases the consumer's interest satisfaction ratio. Compared with existing state-of-the-art schemes, the total handover delay is decreased by up to 0.1632%, 0.3267%, 2.3437%, 2.3255%, and 3.7313% at mobility speeds of 5 m/s, 10 m/s, 15 m/s, 20 m/s, and 25 m/s, and the packet delivery ratio is improved by up to 1.2048%, 5.0632%, 6.4935%, 6.943%, and 8.4507%. Furthermore, the simulation results show better efficiency in terms of Packet Delivery Ratio (PDR), from 0.071 to 0.077, and a decrease in handover delay from 0.1334 to 0.129.
The growing demand for low-delay vehicular content has put tremendous strain on the backbone network. As a promising alternative, cooperative content caching among different cache nodes can reduce content access delay. However, heterogeneous cache nodes have different communication modes and limited caching capacities, and the high mobility of vehicles makes the caching environment more complicated. Therefore, performing efficient cooperative caching becomes a key issue. In this paper, we propose a cross-tier cooperative caching architecture for all contents, which allows distributed cache nodes to cooperate. We then devise the communication link and content caching models to facilitate timely content delivery. An optimization problem is formulated that aims to minimize transmission delay and cache cost. Furthermore, we use a multi-agent deep reinforcement learning (MADRL) approach to model the decision-making process for caching among heterogeneous cache nodes, where each agent interacts with the environment collectively, receives observations but a common reward, and learns its own optimal policy. Extensive simulations validate that the MADRL approach can enhance the hit ratio while reducing transmission delay and cache cost.
Funding: supported by the Key R&D Program of Anhui Province in 2020 under Grant No. 202004a05020078, and the China Environment for Network Innovations (CENI) under Grant No. 2016-000052-73-01-000515.
Funding: supported by the Natural Science Foundation of China (Grants 61901070, 61801065, 62271096, 61871062, U20A20157, and 62061007); in part by the Science and Technology Research Program of Chongqing Municipal Education Commission (Grants KJQN202000603 and KJQN201900611); in part by the Natural Science Foundation of Chongqing (Grants CSTB2022NSCQMSX0468, cstc2020jcyjzdxmX0024, and cstc2021jcyjmsxmX0892); in part by the University Innovation Research Group of Chongqing (Grant CxQT20017); in part by the Youth Innovation Group Support Program of the ICE Discipline of CQUPT (SCIE-QN-2022-04); and in part by the Chongqing Graduate Student Scientific Research Innovation Project (CYB22246).
文摘The emergence of various new services has posed a huge challenge to the existing network architecture.To improve the network delay and backhaul pressure,caching popular contents at the edge of network has been considered as a feasible scheme.However,how to efficiently utilize the limited caching resources to cache diverse contents has been confirmed as a tough problem in the past decade.In this paper,considering the time-varying user requests and the heterogeneous content sizes,a user preference aware hierarchical cooperative caching strategy in edge-user caching architecture is proposed.We divide the caching strategy into three phases,that is,the content placement,the content delivery and the content update.In the content placement phase,a cooperative content placement algorithm for local content popularity is designed to cache contents proactively.In the content delivery phase,a cooperative delivery algorithm is proposed to deliver the cached contents.In the content update phase,a content update algorithm is proposed according to the popularity of the contents.Finally,the proposed caching strategy is validated using the MovieLens dataset,and the results reveal that the proposed strategy improves the delay performance by at least 35.3%compared with the other three benchmark strategies.
Abstract: One of the challenges of Information-Centric Networking (ICN) is finding the optimal location for caching content and processing users' requests. In this paper, we address this challenge by leveraging Software-Defined Networking (SDN) for efficient ICN management. To achieve this, we formulate the problem as a mixed-integer nonlinear programming (MINLP) model incorporating caching, routing, and load-balancing decisions. We explore two distinct scenarios to tackle the problem. First, we solve the problem in an offline mode using the GAMS environment, assuming a stable network state, to demonstrate the superior performance of the cache-enabled network compared to non-cache networks. Subsequently, we investigate the problem in an online mode where the network state dynamically changes over time. Given the computational complexity associated with MINLP, we propose the software-defined caching, routing, and load balancing (SDCRL) algorithm as an efficient and scalable solution. Our evaluation demonstrates that the SDCRL algorithm significantly reduces computation time while producing results that closely resemble those achieved by GAMS.
Funding: Supported by the Jilin Provincial Science and Technology Department Natural Science Foundation of China (20210101415JC) and the Jilin Provincial Science and Technology Department Free Exploration Research Project of China (YDZJ202201ZYTS642).
Abstract: Emerging mobile edge computing (MEC) is considered a feasible solution for offloading the computation-intensive request tasks generated by mobile wireless equipment (MWE) with limited computational resources and energy. Due to the homogeneity of request tasks from one MWE over a long-term period, it is vital to pre-deploy the particular service caches required by the request tasks at the MEC server. In this paper, we model a service-caching-assisted MEC framework that takes into account the constraint on the number of service caches hosted by each edge server and the migration of request tasks from the current edge server to another edge server hosting the service caching required by the tasks. Furthermore, we propose a multi-agent deep reinforcement learning-based computation offloading and task migration decision-making scheme (MBOMS) to minimize the long-term average weighted cost. The proposed MBOMS can learn a near-optimal offloading and migration policy through centralized training and decentralized execution. Systematic and comprehensive simulation results reveal that MBOMS converges well after training and outperforms five baseline algorithms.
Funding: Supported by the National Natural Science Foundation of China under Grants 61901078, 61871062, and U20A20157; in part by the China University Industry-University-Research Collaborative Innovation Fund (Future Network Innovation Research and Application Project) under Grant 2021FNA04008; in part by the China Postdoctoral Science Foundation under Grant 2022MD713692; in part by the Chongqing Postdoctoral Science Special Foundation under Grant 2021XM2018; in part by the Natural Science Foundation of Chongqing under Grant cstc2020jcyj-zdxmX0024; in part by the University Innovation Research Group of Chongqing under Grant CXQT20017; in part by the Science and Technology Research Program of Chongqing Municipal Education Commission under Grant KJQN202000626; and in part by the Youth Innovation Group Support Program of the ICE Discipline of CQUPT under Grant SCIE-QN-2022-04.
Abstract: In this paper, we explore a distributed collaborative caching and computing model to support the distribution of adaptive-bit-rate video streaming. The aim is to reduce the average initial buffer delay and improve the quality of the user experience. Considering the difference between global and local video popularity and the time-varying characteristics of video popularity, a two-stage caching scheme is proposed to push popular videos closer to users and minimize the average initial buffer delay. Based on both long-term and short-term content popularity, the proposed caching solution is decoupled into a proactive cache stage and a cache update stage. In the proactive cache stage, we develop a proactive cache placement algorithm that can be executed during off-peak periods. In the cache update stage, we propose a reactive cache update algorithm that updates the existing cache policy to minimize buffer delay. Simulation results verify that the proposed caching algorithms reduce the initial buffer delay efficiently.
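The reactive update stage described above swaps cached videos for newly popular ones. A minimal sketch of one such update rule, assuming short-term popularity scores are available (the paper's algorithm additionally optimizes buffer delay; the function and data here are hypothetical):

```python
# Hypothetical sketch of a reactive cache update step: if an uncached video is
# now more popular (short-term) than the least popular cached one, swap them.
# The real algorithm also accounts for delay; this keeps only the core rule.

def update_cache(cache, candidates, popularity):
    """cache: set of cached ids; candidates: uncached ids; popularity: id -> score."""
    cache = set(cache)
    for vid in sorted(candidates, key=popularity.get, reverse=True):
        if not cache:
            break
        victim = min(cache, key=popularity.get)
        if popularity[vid] > popularity[victim]:
            cache.remove(victim)
            cache.add(vid)
        else:
            break  # remaining candidates are even less popular
    return cache

pop = {"a": 5, "b": 1, "c": 3, "d": 4}
print(sorted(update_cache({"a", "b"}, ["c", "d"], pop)))  # → ['a', 'd']
```

Here "d" (score 4) replaces the coldest cached video "b" (score 1), while "c" (score 3) cannot displace anything still cached.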
Funding: Supported by the Innovation Fund Project of Jiangxi Normal University (YJS2022065) and the Domestic Visiting Program of Jiangxi Normal University.
Abstract: Mobile Edge Computing (MEC) is a technology designed for the on-demand provisioning of computing and storage services, strategically positioned close to users. In the MEC environment, frequently accessed content can be deployed and cached on edge servers to optimize the efficiency of content delivery, ultimately enhancing the quality of the user experience. However, because edge devices and nodes are typically placed at the network's periphery, these components may face various fault-tolerance challenges, including network instability, device failures, and resource constraints. Given the dynamic nature of MEC, making high-quality content caching decisions for real-time mobile applications, especially latency-sensitive ones, by effectively utilizing mobility information continues to be a significant challenge. In response to this challenge, this paper introduces FT-MAACC, a mobility-aware caching solution grounded in multi-agent deep reinforcement learning and equipped with fault-tolerance mechanisms. This approach integrates content adaptivity algorithms to evaluate the priority of highly user-adaptive cached content. Furthermore, it relies on collaborative caching strategies based on multi-agent deep reinforcement learning models and establishes a fault-tolerance model to ensure the system's reliability, availability, and persistence. Empirical results demonstrate that FT-MAACC outperforms its peer methods in cache hit rate and transmission latency.
Abstract: Mobile Edge Computing (MEC) is a promising technology that provides on-demand computing and efficient storage services as close to end users as possible. In an MEC environment, servers are deployed closer to mobile terminals to exploit storage infrastructure, improve content delivery efficiency, and enhance the user experience. However, due to the limited capacity of edge servers, it remains a significant challenge to meet users' changing, time-varying, and customized needs for highly diversified content. Recently, techniques for caching content at the edge have become popular for addressing these challenges. Edge caching can fill the communication gap between users and content providers while relieving pressure on remote cloud servers. However, existing static caching strategies are still inefficient in handling the dynamics of time-varying content popularity and meeting users' demands for highly diversified entity data. To address this challenge, we introduce PRIME, a novel method for content caching over MEC. It synthesizes a content popularity prediction model, which takes users' stay times and request traces as inputs, with a deep reinforcement learning model that yields dynamic caching schedules. Experimental results demonstrate that PRIME, when tested on the MovieLens 1M dataset for user request patterns and the Shanghai Telecom dataset for user mobility, outperforms its peers in terms of cache hit rate, transmission latency, and system cost.
Funding: Supported by the National Key Research and Development Program of China (2021YFB2900504, 2020YFB1807900, and 2020YFB1807903) and by the National Science Foundation of China under Grants 62271062 and 62071063.
Abstract: As a viable component of the 6G wireless communication architecture, satellite-terrestrial networks support efficient file delivery by leveraging the innate broadcast ability of satellites and the powerful file transmission approaches of multi-tier terrestrial networks. In this paper, we introduce edge computing technology into the satellite-terrestrial network and propose a partition-based cache and delivery strategy to make full use of the integrated resources and reduce the backhaul load. Focusing on the interference from varied nodes at different geographical distances, we derive the successful file transmission probability of the typical user by utilizing tools from stochastic geometry. Considering the constraints of node cache space and file set parameters, we propose a near-optimal partition-based cache and delivery strategy by optimizing the asymptotic successful transmission probability of the typical user. The complex nonlinear programming problem is settled by jointly utilizing the standard particle swarm optimization (PSO) method and a greedy multiple knapsack choice problem (MKCP) optimization method. Numerical results show that, compared with the terrestrial-only cache strategy, the Ground Popular Strategy, the Satellite Popular Strategy, and an independent and identically distributed popularity strategy, the performance of the proposed scheme improves by 30.5%, 9.3%, 12.5%, and 13.7%, respectively.
Funding: Supported in part by the National Key R&D Program of China (2019YFE0196400); the Key Research and Development Program of Shaanxi (2022KWZ09); the National Natural Science Foundation of China (61771358, 61901317, and 62071352); the Fundamental Research Funds for the Central Universities (JB190104); the Joint Education Project between China and Central-Eastern European Countries (202005); and the 111 Project (B08038).
Abstract: In recent years, the exponential proliferation of smart devices and their intelligent applications has posed severe challenges to conventional cellular networks. Such challenges can potentially be overcome by integrating communication, computing, caching, and control (i4C) technologies. In this survey, we first give a snapshot of different aspects of i4C, comprising background, motivation, leading technological enablers, potential applications, and use cases. Next, we describe different models of communication, computing, caching, and control (4C) to lay the foundation of the integration approach. We review current state-of-the-art research efforts related to i4C, focusing on recent trends in both conventional and artificial intelligence (AI)-based integration approaches. We also highlight the need for intelligence in resource integration. Then, we discuss integrated sensing and communication (ISAC) and classify the integration approaches into various classes. Finally, we present open challenges and future research directions for beyond-5G networks, such as 6G.
Funding: Supported by the Beijing Natural Science Foundation under Grant No. L222039.
Abstract: The upsurge of mobile multimedia traffic puts a heavy burden on the cellular network, and wireless caching has emerged as a powerful technique to overcome the backhaul bottleneck and alleviate the network burden. However, most previous works ignored user mobility, thus failing to reap the caching gain from user mobility and having limited practical applications. In this paper, a mobility-aware caching strategy for a software-defined network (SDN)-based network is studied. First, since a typical mobile user (MU) has multiple opportunities to connect with nearby MUs and small base stations (SBSs), the contact times between MUs and between an MU and SBSs are derived as Poisson and Gamma distributions, respectively. Second, we propose a two-tier cooperative caching strategy, where SBSs cache rateless Fountain-code-encoded video blocks probabilistically and non-repeatedly, while MUs store the whole encoded video received last time. The corresponding four-stage transmission process is analyzed, where the key intermediate step is the derivation of the service and failure probabilities of each transmission manner. Finally, we derive the successful offloading rate and the average data offloading ratio (ADOR) as performance metrics. A system optimization problem based on the ADOR is formulated, and two solutions are proposed: a derivative-based solution (DB-Solution) and a long-tail distribution approximation (LTD-Approximation). Simulation results demonstrate that the effectiveness of LTD-Approximation is similar to that of DB-Solution, and the proposed caching strategy achieves quasi-optimal performance compared with other contrast schemes.
Funding: This research was funded by the National Natural Science Foundation of China (No. U21A20451), the Science and Technology Planning Project of Jilin Province (No. 20200401105GX), and the China University Industry-University-Research Innovation Fund (No. 2021FNA01003).
Abstract: In Information-Centric Networking (ICN), where content is the object of exchange, in-network caching is a unique functional feature with the ability to handle data storage and distribution in remote sensing satellite networks. Setting up cache space at any node enables users to access data nearby, thus relieving the processing pressure on the servers. However, existing caching strategies still suffer from a lack of global planning of cache contents and low utilization of cache resources due to the lack of fine-grained division of cache contents. To address these issues, a cooperative caching strategy (CSTL) for remote sensing satellite networks based on a two-layer caching model is proposed. The two-layer caching model is constructed by setting up separate cache spaces in the satellite network and at the ground station. Popular contents in the region are cached probabilistically at the ground station to reduce users' access delay. Within the satellite network, a content classification method based on hierarchical division is proposed, and differential probabilistic caching is employed for different levels of content. The cached content is also dynamically adjusted by analyzing subsequent changes in its popularity. In the two-layer caching model, ground stations and the satellite network cache collaboratively to achieve global planning of cache contents, rationalize the utilization of cache resources, and reduce the propagation delay of remote sensing data. Simulation results show that the CSTL strategy not only achieves a high cache hit ratio compared with other caching strategies but also effectively reduces user request delay and server load, satisfying the timeliness requirement of remote sensing data transmission.
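The differential probabilistic caching idea above can be sketched as follows: contents are classified into popularity levels, and each level is cached with its own probability. The thresholds, level names, and probabilities below are illustrative assumptions, not the values used by CSTL:

```python
import random

# Hypothetical sketch of level-based differential probabilistic caching.
# Tier thresholds and probabilities are illustrative, not CSTL's values.

LEVEL_PROB = {"hot": 0.9, "warm": 0.5, "cold": 0.1}

def classify(popularity, hot=0.6, warm=0.2):
    """Map a normalized popularity score to a cache level."""
    return "hot" if popularity >= hot else "warm" if popularity >= warm else "cold"

def maybe_cache(popularity, rng=random):
    """Probabilistically decide whether a node caches this content."""
    return rng.random() < LEVEL_PROB[classify(popularity)]

print(classify(0.8), classify(0.4), classify(0.05))  # → hot warm cold
```

Caching hot content with high probability keeps replicas widespread, while the low probability for cold content avoids wasting scarce satellite cache space.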
Funding: Supported in part by the Natural Science Foundation of China (Grant Nos. 62121001, 62171344, and 61931005); in part by the Young Elite Scientists Sponsorship Program by CAST; in part by the Key Industry Innovation Chain of Shaanxi (Grant Nos. 2022ZDLGY0501 and 2022ZDLGY05-06); in part by the Key Research and Development Program of Shaanxi (Grant No. 2021KWZ-05); and in part by the Major Key Project of PCL (PCL2021A15).
Abstract: In this paper, we reveal the fundamental limitation of network densification on the performance of a caching-enabled small cell network (CSCN) under two typical user association rules, namely content-based and distance-based rules. It indicates that immoderately caching content would significantly change the interference distribution in the CSCN, which may degrade the network area spectral efficiency (ASE). Meanwhile, it is shown that the content-based rule outperforms the distance-based rule in terms of network ASE only when small cell base stations (BSs) are sparsely deployed with low decoding thresholds. Moreover, it is proved that the network ASE under distance-based user association serves as an upper bound on that under the content-based rule in the dense-BS regime. To enable more spectrum-efficient user association in dense CSCNs, we further optimize network ASE by designing a probabilistic content retrieving strategy based on the distance-based rule. With the optimized retrieving probability, network ASE can be substantially enhanced and even increases with growing BS density in the dense-BS regime.
Funding: Supported by the National Natural Science Foundation of China (No. 61971060).
Abstract: In coded caching, users cache pieces of files under a specific arrangement so that the server can satisfy their requests simultaneously in the broadcast scenario via the eXclusive OR (XOR) operation, thereby reducing the amount of transmitted data. However, when users' locations change, the uploading of caching information becomes so frequent and extensive that the traffic increase outweighs the traffic reduction achieved by traditional coded caching. In this paper, we propose mobile coded caching schemes to reduce network traffic in mobility scenarios, which achieve a lower cost for uploading caching information. In the cache placement phase, the proposed scheme first constructs caching patterns and then assigns them to users, either according to the graph coloring method and the four color theorem in our centralized cache placement algorithm, or randomly in our decentralized cache placement algorithm. Users are then divided into groups based on their caching patterns. As a benefit, when a user moves, only the type of caching pattern, rather than the whole caching information describing which file pieces are cached, is uploaded. In the content delivery phase, XOR coded caching messages are reconstructed. The transmission data volume is derived to measure the performance of the proposed schemes. Numerical results show that the proposed schemes achieve great improvement in traffic offloading.
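The XOR delivery mechanism underlying coded caching can be shown with the classic two-user, two-file example: each user caches complementary halves of both files, and a single coded broadcast serves both requests at once (file names and piece contents below are illustrative):

```python
# Minimal illustration of XOR coded delivery (two users, two files),
# the basic mechanism behind coded caching. File pieces are raw bytes.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Each file is split into two pieces; user 1 caches the first pieces,
# user 2 caches the second pieces.
A1, A2 = b"AA", b"aa"
B1, B2 = b"BB", b"bb"
cache1 = {"A1": A1, "B1": B1}
cache2 = {"A2": A2, "B2": B2}

# User 1 requests file A, user 2 requests file B.
broadcast = xor(A2, B1)  # one coded transmission serves both users

recovered_A2 = xor(broadcast, cache1["B1"])  # user 1 cancels B1 from the code
recovered_B1 = xor(broadcast, cache2["A2"])  # user 2 cancels A2 from the code
assert recovered_A2 == A2 and recovered_B1 == B1
print("both users served with a single coded broadcast")
```

Two uncoded unicasts would cost two piece transmissions; the coded broadcast costs one, which is exactly the saving that frequent mobility-driven cache-state uploads can erode.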
Funding: Supported by the National Key R&D Program of China under Grant No. 2021YFB2900200 and the China Postdoctoral Science Foundation under Grant No. 2022M713475.
Abstract: With the surge of content-centric applications, it is challenging to balance network traffic and cater to low-delay requirements. A hierarchical caching architecture spanning both the edge network (EN) and the core network (CN) has emerged, leveraging caching resources to reduce content delivery delay. Most previous work makes the impractical assumption of treating the CN as a content provider, which neglects collaboration by intermediate CN caches. Most importantly, it is still necessary to thoroughly study the tradeoff between CN delay and edge delay in file delivery so as to minimize the overall delivery delay across the network. In this paper, we consider a hierarchical caching network with distributed CN nodes and edge nodes, where cooperative transmission enables edge nodes to transmit multiple files simultaneously. This poses a joint optimization problem of hierarchical file caching and fetching to minimize the overall delivery delay of requests. Since the problem is NP-hard, we decompose the original problem and design an iterative algorithm to address it. Numerical results validate that the proposed scheme can find a balanced solution between lowering edge delay by utilizing coordinated CN caching and lowering CN delay by relying solely on edge caching.
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) under Grant No. 61901075; the Natural Science Foundation of Chongqing, China, under Grant No. cstc2019jcyj-msxmX0602; the Chongqing Basic and Cutting-Edge Project under Grant No. cstc2018jcyjAX0507; and the Chongqing University of Posts and Telecommunications Doctoral Candidates High-end Talent Training Project (No. BYJS2017001).
Abstract: Cell-free wireless heterogeneous networks (HetNets) have emerged as a technological alternative to conventional cellular networks. In this paper, we study the spatially correlated caching strategy, the energy analysis, and the impact of the parameter β on the total energy cost of cell-free wireless HetNets with access points (APs) distributed according to a Beta Ginibre Point Process (β-GPP). We derive an approximate expression for the Successful Delivery Probability (SDP) based on the Signal-to-Interference-plus-Noise Ratio coverage model. Both analytical and simulation results show that the proposed caching model based on β-GPP placement, which jointly takes into account path loss, fading, and interference, can closely simulate the caching performance of cell-free HetNets in terms of SDP. By guaranteeing the outage probability constraints, an analytical expression for the uplink energy cost is also derived. Another conclusion is that, with AP locations modeled by a β-GPP, the power consumption is not sensitive to β but is sensitive to the dimension of the kernel function; hence β is less restrictive, and only the truncation of the Ginibre kernel has to be appropriately modified. These findings are new compared with the existing literature, where nodes in cell-free systems are commonly assumed to follow a Poisson Point Process, a Matérn Hard-Core Process, or a Poisson Cluster Process.
Abstract: To effectively reduce cache defragmentation overhead and improve the cache hit rate under data-intensive workloads, a persistent distributed file system client cache, DFS-Cache (Distributed File System Cache), is proposed. DFS-Cache is designed and implemented on non-volatile memory (NVM); it guarantees data persistence and crash consistency and greatly reduces cold-start time. DFS-Cache comprises a cache defragmentation mechanism based on virtual memory remapping and a cache space management strategy based on time-to-live (TTL). The former exploits the fact that NVM can be directly addressed by the memory controller, dynamically modifying the mapping between virtual and physical addresses to achieve zero-copy defragmentation. The latter is a hot/cold-separated group management strategy that, with the help of the remapping-based defragmentation mechanism, improves the efficiency of cache space management. In experiments with real Intel Optane persistent memory devices and standard benchmarks such as Fio and Filebench, DFS-Cache improves system throughput by up to 5.73× and 1.89× compared with the commercial distributed file systems MooseFS and GlusterFS, respectively.
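The TTL-based hot/cold grouping in DFS-Cache's space management can be sketched as follows: entries whose TTL has expired since their last access are demoted to a cold group, the natural source of eviction and defragmentation candidates. The data layout and function below are hypothetical simplifications of the mechanism described above:

```python
import time

# Hypothetical sketch of TTL-based hot/cold grouping: entries whose TTL has
# expired since last access are demoted to a cold group, which supplies the
# preferred eviction/defragmentation candidates.

def split_hot_cold(entries, now=None):
    """entries: dict of key -> (last_access_ts, ttl_seconds)."""
    now = time.time() if now is None else now
    hot, cold = [], []
    for key, (ts, ttl) in entries.items():
        (hot if now - ts < ttl else cold).append(key)
    return hot, cold

entries = {"a": (100.0, 50.0), "b": (10.0, 50.0)}
print(split_hot_cold(entries, now=120.0))  # → (['a'], ['b'])
```

Separating groups this way means the defragmenter compacts mostly cold regions, so hot data stays in place and remapping work is minimized.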
Funding: Supported by the Communication Soft Science Program of the Ministry of Industry and Information Technology of China (No. 2022-R-43), the Natural Science Basic Research Program of Shaanxi (No. 2021JQ-719), and the Graduate Innovation Fund of Xi'an University of Posts and Telecommunications (No. CXJJZL2021014).
Abstract: Deploying task caching at edge servers has become an effective way to handle compute-intensive and latency-sensitive tasks on the industrial internet. However, selecting task scheduling locations that reduce task delay and cost while ensuring data security and reliable communication in edge computing remains a challenge. To solve this problem, this paper establishes a task scheduling model with joint blockchain and task caching in the industrial internet and designs a novel blockchain-assisted caching mechanism to enhance system security. The task scheduling problem, which couples the task scheduling decision, the task caching decision, and the blockchain reward, is formulated as a minimum-weighted-cost problem under delay constraints. This is a mixed-integer nonlinear problem, which is proved to be nonconvex and NP-hard. To find the optimal solution, this paper proposes a task scheduling strategy algorithm based on an improved genetic algorithm (IGA-TSPA), improving the genetic algorithm's initialization and mutation operations to reduce the size of the initial solution space and speed up convergence to the optimal solution. In addition, an Improved Least Frequently Used algorithm is proposed to improve the content hit rate. Simulation results show that IGA-TSPA finds optimal solutions faster and runs in less time than existing edge computing scheduling algorithms. The established task scheduling model not only saves 62.19% of system overhead compared with local computing but is also of great significance in protecting data security, reducing task processing delay, and reducing system cost.
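For context on the eviction policy named above, a plain Least Frequently Used (LFU) cache baseline is sketched below; the paper's "Improved" variant modifies this baseline to raise the hit rate, and those modifications are not reproduced here:

```python
from collections import defaultdict

# Plain LFU cache sketch: on insertion into a full cache, evict the key
# with the lowest access frequency. The paper improves on this baseline.

class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}
        self.freq = defaultdict(int)

    def get(self, key):
        if key not in self.store:
            return None
        self.freq[key] += 1
        return self.store[key]

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda k: self.freq[k])
            del self.store[victim]
            del self.freq[victim]
        self.store[key] = value
        self.freq[key] += 1

cache = LFUCache(2)
cache.put("x", 1); cache.put("y", 2)
cache.get("x")              # x is now more frequently used than y
cache.put("z", 3)           # evicts y, the least frequently used key
print(sorted(cache.store))  # → ['x', 'z']
```

A known weakness of plain LFU is that stale but once-popular items linger; improved variants typically add aging or recency signals, which is the kind of refinement the paper targets.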
Funding: This work was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2023-2018-0-01431), supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Abstract: An Information-Centric Network (ICN) provides a promising paradigm for the upcoming internet architecture, which will struggle with steady growth in data and changes in access models. Various ICN architectures have been designed, including Named Data Networking (NDN), which is designed around content delivery instead of hosts, since data is the central part of the network. NDN was therefore developed to remove the dependency on IP addresses and provide content effectively. Mobility is one of the major research dimensions for this upcoming internet architecture. Some research has been carried out to solve mobility issues, but problems such as handover delay and packet loss during real-time video streaming remain in the case of consumer and producer mobility. To solve this issue, an efficient hierarchical Cluster-Based Proactive Caching for Device Mobility Management (CB-PC-DMM) in NDN Vehicular Networks (NDN-VN) is proposed, through which the consumer receives contents proactively after handover during consumer mobility. When a consumer moves to the next destination, a handover interest is sent to the connected router, and the router then multicasts the consumer's desired data packet to the next hop of neighboring routers. Thus, once the handover process is completed, the consumer can easily obtain the content from the newly connected router. The proposed CB-PC-DMM in NDN-VN improves the packet delivery ratio and reduces handover delay as well as cluster overhead. Moreover, the intra- and inter-domain handover handling procedures of CB-PC-DMM for NDN-VN are described. For validation of the proposed scheme, MATLAB simulations are conducted. The simulation results show that the proposed scheme reduces handover delay and increases the consumer's interest satisfaction ratio. Compared with existing state-of-the-art schemes, the total handover delay is decreased by up to 0.1632%, 0.3267%, 2.3437%, 2.3255%, and 3.7313% at mobility speeds of 5 m/s, 10 m/s, 15 m/s, 20 m/s, and 25 m/s, and the packet delivery ratio is improved by up to 1.2048%, 5.0632%, 6.4935%, 6.943%, and 8.4507%, respectively. Furthermore, the simulation results show better efficiency in terms of Packet Delivery Ratio (PDR), from 0.071 to 0.077, and a decrease in handover delay from 0.1334 to 0.129.
Funding: Supported by the National Natural Science Foundation of China (62231020 and 62101401) and the Youth Innovation Team of Shaanxi Universities.
Abstract: The growing demand for low-delay vehicular content has put tremendous strain on the backbone network. As a promising alternative, cooperative content caching among different cache nodes can reduce content access delay. However, heterogeneous cache nodes have different communication modes and limited caching capacities. In addition, the high mobility of vehicles makes the caching environment more complicated. Therefore, performing efficient cooperative caching becomes a key issue. In this paper, we propose a cross-tier cooperative caching architecture for all contents, which allows distributed cache nodes to cooperate. We then devise communication link and content caching models to facilitate timely content delivery. Aiming at minimizing transmission delay and cache cost, an optimization problem is formulated. Furthermore, we use a multi-agent deep reinforcement learning (MADRL) approach to model the decision-making process for caching among heterogeneous cache nodes, where each agent interacts with the environment collectively, receives observations and a common reward, and learns its own optimal policy. Extensive simulations validate that the MADRL approach enhances the hit ratio while reducing transmission delay and cache cost.