Scatter-hoarding rodents store seeds throughout their home ranges in superficially buried caches which, unlike seeds larder-hoarded in burrows, are difficult to defend. Cached seeds are often pilfered by other scatter-hoarders and either re-cached, eaten, or larder-hoarded. Such seed movements can influence seedling recruitment, because only seeds remaining in caches are likely to germinate. Although the importance of scatter-hoarding rodents in the dispersal of western juniper seeds has recently been revealed, the level of pilfering that occurs after initial burial is unknown. Seed traits, soil moisture, and substrate can influence pilfering processes, but less is known about how pilfering varies among caches placed in open versus canopy microsites, or how cache discovery and removal vary between canopy types (tree versus shrub). We compared the removal of artificial caches between open and canopy microsites and between tree and shrub canopies at two sites in northeastern California during late spring and fall. We also used trail cameras at one site to monitor artificial cache removal, identify potential pilferers, and illuminate microsite use by scatter-hoarders. Removal of artificial caches was faster in open microsites at both sites during both seasons, and more caches were removed from shrub than tree canopies. California kangaroo rats were the species observed most on cameras, foraging most often in open microsites, which could explain the observed pilfering patterns. This is the first study to document pilfering of western juniper seeds, providing further evidence of the importance of scatter-hoarding rodent foraging behavior in understanding seedling recruitment processes in juniper woodlands.
The widening gap between processor and memory speeds makes the cache an important issue in computer system design. Compared with the working sets of programs, cache resources are often scarce, so it is very important for a computer system to use its cache efficiently. Targeting DOOC (Data-Object Oriented Cache), a recently proposed dynamically reconfigurable cache, this paper proposes a quantitative framework for analyzing the cache requirements of data-objects, covering cache capacity, block size, associativity, and coherence protocol. A graph coloring algorithm that handles the competition between data-objects in the DOOC is proposed as well. Finally, we apply our approaches to the compiler management of the DOOC and test them on both a single-core platform and a four-core platform. Compared with traditional caches, the DOOC achieves an average miss-rate reduction of 44.98% and 49.69% on the two platforms respectively, and its performance is very close to that of an ideal optimal cache.
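The graph-coloring idea in the abstract above can be pictured as follows: data-objects that compete for the cache at the same time become neighbours in a conflict graph, and a greedy coloring assigns conflicting objects to different cache partitions. This is a minimal illustrative sketch, not the paper's actual algorithm; the object names and conflict sets are hypothetical.

```python
# Hypothetical sketch: greedy coloring of a data-object conflict graph.
# An edge means two objects are live at the same time and should not
# share a cache partition; colors stand for partition indices.
def color_objects(objects, conflicts):
    """Assign each data-object the lowest partition index not used by
    any conflicting neighbour (Welsh-Powell-style greedy coloring)."""
    # Visit higher-degree objects first so dense conflicts are resolved early.
    order = sorted(objects, key=lambda o: -len(conflicts.get(o, set())))
    color = {}
    for obj in order:
        used = {color[n] for n in conflicts.get(obj, set()) if n in color}
        c = 0
        while c in used:
            c += 1
        color[obj] = c
    return color

conflicts = {"A": {"B", "C", "D"}, "B": {"A", "C"}, "C": {"A", "B"}, "D": {"A"}}
print(color_objects(["A", "B", "C", "D"], conflicts))
```

Note that "D" can reuse a color already given to "B" because they never conflict, which is exactly how coloring lets non-competing objects share a partition.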
Content-centric network (CCN) is a new Internet architecture in which content is treated as the primitive of communication. In CCN, routers are equipped with content stores at the content level, which act as caches for frequently requested content. Based on this design, the Internet can provide content distribution services without any application-layer support. In addition, as caches are integrated into routers, the overall performance of CCN is deeply affected by caching efficiency. In this paper, our aim is to gain some insights on how caches should be designed to maintain high performance in a cost-efficient way. We model the two-layer cache hierarchy composed of CCN routers using a two-dimensional discrete-time Markov chain, and develop an efficient algorithm to calculate the hit ratios of these caches. Simulations validate the accuracy of our modeling method, and convey some meaningful information which can help us better understand the caching mechanism of CCN.
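Analytical hit-ratio models like the Markov chain above are commonly sanity-checked against event-driven simulation. The sketch below simulates a single LRU content store under a Zipf-like request stream; the catalog size, cache capacity, and popularity skew are illustrative assumptions, not the paper's parameters.

```python
import random
from collections import OrderedDict

def lru_hit_ratio(trace, capacity):
    """Fraction of requests served from an LRU cache of given capacity."""
    cache, hits = OrderedDict(), 0
    for item in trace:
        if item in cache:
            hits += 1
            cache.move_to_end(item)  # refresh recency on a hit
        else:
            cache[item] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(trace)

# Zipf-like request trace over 100 contents: a few items dominate.
random.seed(0)
weights = [1 / (r + 1) for r in range(100)]
trace = random.choices(range(100), weights=weights, k=20000)
print(lru_hit_ratio(trace, 10))  # larger capacities give higher ratios
```

Because LRU has the stack (inclusion) property, the measured hit ratio grows monotonically with capacity on the same trace, which gives a quick consistency check for any analytical model.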
In-network caching is a fundamental mechanism advocated by information-centric networks (ICNs) for efficient content delivery. However, this new mechanism also brings serious privacy risks due to cache snooping attacks. One effective solution to this problem is the random-cache, where the cache in a router randomly mimics a cache hit or a cache miss for each content request/probe. In this paper, we investigate the effectiveness of using multiple random-caches to protect cache privacy in a multi-path ICN. We propose models for characterizing the privacy of multi-path ICNs with random-caches, and analyze two different attack scenarios: 1) prefix-based attacks and 2) suffix-based attacks. Both homogeneous and heterogeneous caches are considered. Our analysis shows that in a multi-path ICN an adversary can potentially gain more privacy information by adopting prefix-based attacks. Furthermore, heterogeneous caches provide much better privacy protection than homogeneous ones under both attacks. The effect of different parameters on the privacy of multi-path random-caches is further investigated, and a comparison with the single-path counterpart is carried out through numerical evaluations. The analysis and results in this paper provide insights into designing and evaluating multi-path ICNs when privacy is taken into consideration.
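The core of the random-cache defense is that the observable hit/miss answer is decoupled from the true cache state, so a snooping probe learns nothing about what neighbours actually fetched. A minimal sketch of that behavior (the class name, probability, and seed are assumptions for illustration):

```python
import random

class RandomCache:
    """Sketch of a random-cache: answer each probe 'hit' with a fixed
    probability, independent of the real cache contents, so timing
    probes cannot reveal which contents were actually cached."""
    def __init__(self, hit_prob, seed=None):
        self.hit_prob = hit_prob
        self.rng = random.Random(seed)

    def probe(self, name):
        # The visible answer is drawn at random, not from cache state.
        return "hit" if self.rng.random() < self.hit_prob else "miss"

cache = RandomCache(hit_prob=0.5, seed=42)
answers = [cache.probe("/video/clip1") for _ in range(1000)]
print(answers.count("hit") / 1000)  # close to hit_prob
```

In a real router the cache would still serve genuine hits from its store; only the externally observable signal is randomized, which is the property the paper's privacy models build on.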
To optimize multi-core processor performance, this work studies management policies for the shared cache in multi-core processors and proposes MT-FTP (Memory Time based Fair and Throughput Partitioning), a shared-cache partitioning algorithm based on cache-time fairness and throughput. A mathematical model is built around the two evaluation metrics, fairness and throughput, and the partitioning flow of the algorithm is analyzed. Simulation results show that MT-FTP performs well in system throughput: its average IPC (Instructions Per Cycle) is 1.3% higher than that of UCP (Utility-based Cache Partitioning) and 11.6% higher than that of LRU (Least Recently Used). MT-FTP's average system fairness is 17% higher than LRU's and 16.5% higher than UCP's. The algorithm achieves fairness in shared-cache partitioning while maintaining system throughput.
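Shared-cache partitioning schemes of this family typically allocate cache ways across cores using per-core miss curves. The sketch below shows the generic marginal-utility heuristic (each way goes to the core whose misses drop the most), which is the idea behind UCP-style baselines; it is a simplified stand-in, not the MT-FTP algorithm itself, and the miss curves are invented for illustration.

```python
def partition_ways(miss_curves, total_ways):
    """Greedy cache-way partitioning: repeatedly give the next way to
    the core whose miss count drops the most. Each miss curve must have
    at least total_ways + 1 entries (misses[w] = misses with w ways)."""
    alloc = [0] * len(miss_curves)
    for _ in range(total_ways):
        # Marginal utility of one more way for each core.
        gains = [
            miss_curves[i][alloc[i]] - miss_curves[i][alloc[i] + 1]
            for i in range(len(miss_curves))
        ]
        winner = gains.index(max(gains))
        alloc[winner] += 1
    return alloc

core0 = [100, 60, 40, 35, 33]   # cache-sensitive workload
core1 = [50, 44, 43, 42, 41]    # streaming workload, barely benefits
print(partition_ways([core0, core1], 4))
```

A fairness-aware scheme such as MT-FTP would additionally weigh how unevenly each core is slowed down relative to running alone, rather than maximizing raw throughput only.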
With the explosive growth of high-definition video streaming data, a substantial increase in network traffic has ensued. The emergence of mobile edge caching (MEC) can not only alleviate the burden on the core network, but also significantly improve user experience. By integrating MEC with satellite networks, the network can deliver popular content ubiquitously and seamlessly. Addressing the research gap between multilayer satellite networks and MEC, we study the caching placement problem in this paper. Initially, we introduce a three-layer distributed network caching management architecture designed for efficient and flexible handling of large-scale networks. Considering the constraints on satellite capacity and content propagation delay, the cache placement problem is then formulated and transformed into a Markov decision process (MDP), where a content coded caching mechanism is utilized to promote the efficiency of content delivery. Furthermore, a new generic metric, content delivery cost, is proposed to evaluate the performance of caching decisions in large-scale networks. Then, we introduce a graph convolutional network (GCN)-based multi-agent advantage actor-critic (A2C) algorithm to optimize the caching decision. Finally, extensive simulations are conducted to evaluate the proposed algorithm in terms of content delivery cost and transferability.
The rapid development of 5G/6G and AI enables an Internet of Everything (IoE) environment that can support millions of connected mobile devices and applications operating smoothly at high speed and low delay. However, these massive numbers of devices lead to explosive traffic growth, which in turn places a great burden on data transmission and content delivery. This challenge can be eased by sinking some critical content from the cloud to the edge. In this case, how to determine the critical content, where to sink it, and how to access the content correctly and efficiently become new challenges. This work focuses on establishing a highly efficient content delivery framework in the IoE environment. In particular, the IoE environment is re-constructed as an end-edge-cloud collaborative system, in which the concept of the digital twin is applied to promote collaboration. Based on the digital assets obtained by digital twins from end users, a content popularity prediction scheme is first proposed to decide the critical content, using a Temporal Pattern Attention (TPA) enabled Long Short-Term Memory (LSTM) model. Then, the prediction results are fed into the proposed caching scheme to decide where to sink the critical content using Reinforcement Learning (RL). Finally, a collaborative routing scheme is proposed to determine how to access the content with the objective of minimizing overhead. The experimental results indicate that the proposed schemes outperform state-of-the-art benchmarks in terms of caching hit rate, average throughput, successful content delivery rate, and average routing overhead.
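The prediction stage above maps per-content request histories to a one-step popularity forecast. As a much simpler stand-in for the TPA-LSTM model (useful as a baseline), exponential smoothing captures the same input/output shape; the content names and counts below are invented for illustration.

```python
def predict_popularity(history, alpha=0.6):
    """One-step popularity forecast per content via exponential
    smoothing: s <- alpha * latest + (1 - alpha) * s. A simplified
    baseline standing in for a learned sequence model."""
    forecast = {}
    for content, series in history.items():
        s = series[0]
        for x in series[1:]:
            s = alpha * x + (1 - alpha) * s
        forecast[content] = s
    return forecast

# Hypothetical per-epoch request counts for two contents.
hist = {"clip1": [10, 12, 18, 25], "clip2": [30, 22, 15, 9]}
pred = predict_popularity(hist)
print(pred["clip1"] > pred["clip2"])  # rising content overtakes fading one
```

The caching scheme would then rank contents by the forecast and sink the top-ranked ones toward the edge; a learned model like TPA-LSTM replaces the smoothing step when popularity has complex temporal patterns.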
As users’ access to the network has evolved into the acquisition of massive content rather than IP addresses, the IP network architecture based on end-to-end communication cannot meet users’ needs, and so Information-Centric Networking (ICN) came into being. From a technical point of view, ICN is a promising future network architecture, and researching and customizing a reasonable pricing mechanism plays a positive role in promoting its deployment. Current research on ICN pricing mechanisms focuses on paid content. Therefore, we study an ICN pricing model for free content, which we analyze with game theory based on the Nash equilibrium. In this work, advertisers are considered, and an advertiser model is established to describe the economic interaction between advertisers and ICN entities. This solution can formulate the best pricing strategy for all ICN entities and maximize the benefits of each entity. Our extensive analysis and numerical results show that the proposed pricing framework is significantly better than existing solutions when it comes to free content.
One of the challenges of Information-centric Networking (ICN) is finding the optimal location for caching content and processing users’ requests. In this paper, we address this challenge by leveraging Software-defined Networking (SDN) for efficient ICN management. To achieve this, we formulate the problem as a mixed-integer nonlinear programming (MINLP) model, incorporating caching, routing, and load balancing decisions. We explore two distinct scenarios to tackle the problem. First, we solve the problem in an offline mode using the GAMS environment, assuming a stable network state, to demonstrate the superior performance of the cache-enabled network compared to non-cache networks. Subsequently, we investigate the problem in an online mode where the network state dynamically changes over time. Given the computational complexity associated with MINLP, we propose the software-defined caching, routing, and load balancing (SDCRL) algorithm as an efficient and scalable solution. Our evaluation demonstrates that the SDCRL algorithm significantly reduces computational time while maintaining results that closely resemble those achieved by GAMS.
The emergence of various new services has posed a huge challenge to the existing network architecture. To reduce network delay and backhaul pressure, caching popular content at the edge of the network has been considered a feasible scheme. However, how to efficiently utilize limited caching resources to cache diverse content has remained a tough problem over the past decade. In this paper, considering time-varying user requests and heterogeneous content sizes, a user-preference-aware hierarchical cooperative caching strategy in an edge-user caching architecture is proposed. We divide the caching strategy into three phases: content placement, content delivery, and content update. In the content placement phase, a cooperative content placement algorithm based on local content popularity is designed to cache content proactively. In the content delivery phase, a cooperative delivery algorithm is proposed to deliver the cached content. In the content update phase, a content update algorithm is proposed according to the popularity of the content. Finally, the proposed caching strategy is validated using the MovieLens dataset, and the results reveal that the proposed strategy improves delay performance by at least 35.3% compared with the other three benchmark strategies.
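The content update phase described above can be sketched as a popularity-driven replacement rule: an uncached candidate displaces the least popular cached content whenever it is more popular. This is a minimal sketch under assumed popularity scores and uniform content sizes, not the paper's exact algorithm.

```python
def update_cache(cache, popularity, candidates, capacity):
    """Popularity-driven update phase: admit candidates in decreasing
    popularity, evicting the least popular cached content whenever an
    uncached candidate beats it."""
    cache = set(cache)  # work on a copy
    for c in sorted(candidates, key=lambda x: -popularity[x]):
        if c in cache:
            continue
        if len(cache) < capacity:
            cache.add(c)
            continue
        coldest = min(cache, key=lambda x: popularity[x])
        if popularity[c] > popularity[coldest]:
            cache.remove(coldest)
            cache.add(c)
    return cache

# Hypothetical popularity scores and an initial cache of capacity 3.
pop = {"a": 9, "b": 7, "c": 2, "d": 5, "e": 1}
print(sorted(update_cache({"c", "e"}, pop, ["a", "b", "d"], 3)))
```

With heterogeneous content sizes, as in the paper, the admission test would compare popularity per unit of occupied capacity rather than raw popularity.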
A notable portion of cachelines in real-world workloads exhibits inner non-uniform access behaviors. However, modern cache management rarely considers this fine-grained feature, which impacts the effective cache capacity of contemporary high-performance spacecraft processors. To harness these non-uniform access behaviors, an efficient cache replacement framework featuring an auxiliary cache specifically designed to retain evicted hot data is proposed. This framework reconstructs the cache replacement policy, facilitating data migration between the main cache and the auxiliary cache. Unlike traditional cacheline-granularity policies, the approach excels at identifying and evicting infrequently used data, thereby optimizing cache utilization. The evaluation shows impressive performance improvement, especially on workloads with irregular access patterns. Benefiting from its fine granularity, the proposal achieves superior storage efficiency compared with commonly used cache management schemes, providing a potential optimization opportunity for modern resource-constrained processors, such as spacecraft processors. Furthermore, the framework complements existing modern cache replacement policies and can be seamlessly integrated with minimal modifications, enhancing their overall efficacy.
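The main-cache/auxiliary-cache migration described above resembles a classic victim-cache arrangement: lines evicted from the main cache are parked in a small auxiliary store, and a hit there migrates the line back. A minimal sketch with LRU in both structures (capacities and the class name are assumptions; the paper's policy is more selective about which evicted lines it retains):

```python
from collections import OrderedDict

class VictimCachedLRU:
    """Main LRU cache backed by a small auxiliary ('victim') cache that
    retains recently evicted lines; a victim hit migrates the line back."""
    def __init__(self, main_cap, victim_cap):
        self.main = OrderedDict()
        self.victim = OrderedDict()
        self.main_cap, self.victim_cap = main_cap, victim_cap

    def access(self, line):
        if line in self.main:
            self.main.move_to_end(line)
            return "main-hit"
        hit = line in self.victim
        if hit:
            del self.victim[line]          # migrate back to main cache
        self.main[line] = True
        if len(self.main) > self.main_cap:
            evicted, _ = self.main.popitem(last=False)
            self.victim[evicted] = True    # keep the evicted line around
            if len(self.victim) > self.victim_cap:
                self.victim.popitem(last=False)
        return "victim-hit" if hit else "miss"

c = VictimCachedLRU(main_cap=2, victim_cap=2)
print([c.access(x) for x in ["a", "b", "c", "a"]])
```

Without the auxiliary store, the final access to "a" would be a full miss; retaining evicted data converts it into a cheap migration, which is the effect the framework exploits for hot lines.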
Mobile edge computing (MEC) is a promising paradigm that deploys edge servers (nodes) with computation and storage capacity close to IoT devices. Content providers can cache data in edge servers and provide services for IoT devices, which effectively reduces the delay of acquiring data. With an increasing number of IoT devices requesting services, spectrum resources are generally limited. To effectively meet the challenge of limited spectrum resources, Non-Orthogonal Multiple Access (NOMA) is adopted to improve transmission efficiency. In this paper, we consider the caching scenario in a NOMA-enabled MEC system. All devices compete for the limited resources and tend to minimize their own cost. We formulate the caching problem with the goal of minimizing the delay cost for each individual device subject to resource constraints, and reformulate the optimization as a non-cooperative game model. We prove the existence of a Nash equilibrium (NE) solution in the game model. Then, we design the Game-based Cost-Efficient Edge Caching Algorithm (GCECA) to solve the problem. The effectiveness of our GCECA algorithm is validated by both parameter analysis and comparison experiments.
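Non-cooperative caching games of this kind are often solved by best-response dynamics: each device repeatedly switches to its cheapest option given everyone else's choices, and the process stops at a Nash equilibrium where nobody wants to deviate. A toy sketch (the delay model and all numbers are invented; the paper's GCECA and cost structure differ):

```python
def best_response_caching(n_devices, edge_base, cloud_delay, rounds=100):
    """Best-response dynamics for a toy caching game: each device picks
    the shared edge cache (delay grows linearly with the number of users
    on it) or the cloud (fixed delay); iterate until no device switches."""
    choice = ["cloud"] * n_devices
    for _ in range(rounds):
        changed = False
        for i in range(n_devices):
            edge_users = sum(1 for j, c in enumerate(choice)
                             if c == "edge" and j != i)
            edge_delay = edge_base * (edge_users + 1)  # congestion cost
            best = "edge" if edge_delay < cloud_delay else "cloud"
            if best != choice[i]:
                choice[i] = best
                changed = True
        if not changed:  # Nash equilibrium: every choice is a best response
            break
    return choice.count("edge")

print(best_response_caching(n_devices=10, edge_base=2.0, cloud_delay=9.0))
```

Devices join the edge until the congested edge delay reaches the cloud delay, after which no one has an incentive to move; this kind of fixed point is what the paper's NE existence proof guarantees for its richer cost model.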
Emerging mobile edge computing (MEC) is considered a feasible solution for offloading the computation-intensive request tasks generated by mobile wireless equipment (MWE) with limited computational resources and energy. Due to the homogeneity of request tasks from one MWE over a long-term period, it is vital to pre-deploy the particular service cachings required by the request tasks at the MEC server. In this paper, we model a service caching-assisted MEC framework that takes into account the constraint on the number of service cachings hosted by each edge server and the migration of request tasks from the current edge server to another edge server with the service caching required by the tasks. Furthermore, we propose a multi-agent deep reinforcement learning-based computation offloading and task migrating decision-making scheme (MBOMS) to minimize the long-term average weighted cost. The proposed MBOMS can learn the near-optimal offloading and migrating decision-making policy by centralized training and decentralized execution. Systematic and comprehensive simulation results reveal that our proposed MBOMS converges well after training and outperforms five baseline algorithms.
Edge caching has emerged as a promising application paradigm in 5G networks; by building edge networks to cache content, it can alleviate the traffic load brought about by the rapid growth of Internet of Things (IoT) services and applications. Due to the limitations of edge servers (ESs) and the large number of user demands, how to make caching decisions and utilize the resources of ESs is significant. In this paper, we aim to minimize the total system energy consumption in a heterogeneous network and formulate the content caching optimization problem as a Mixed Integer Non-Linear Programming (MINLP) problem. To address the optimization problem, a Deep Q-Network (DQN)-based method is proposed to improve the overall performance of the system and reduce the backhaul traffic load. In addition, the DQN-based method can effectively overcome the limitations of traditional reinforcement learning (RL) in complex scenarios. Simulation results show that the proposed DQN-based method greatly outperforms other benchmark methods, and significantly improves the cache hit rate and reduces the total system energy consumption in different scenarios.
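Learning-based caching methods like the DQN above share a common loop: observe requests, estimate each content's value, and adapt the cache, trading exploration against exploitation. The sketch below uses a far simpler bandit-style learner as a stand-in for the DQN (the trace, catalog, and parameters are invented; a real DQN would learn a neural value function over richer state).

```python
import random

def bandit_cache(trace, catalog, epsilon=0.1, seed=1):
    """Simplified learning-based caching sketch: keep a running request
    count per content, cache the current top item (exploit), and
    occasionally cache a random one instead (explore)."""
    rng = random.Random(seed)
    counts = {c: 0 for c in catalog}
    cached, hits = catalog[0], 0
    for req in trace:
        if req == cached:
            hits += 1
        counts[req] += 1
        if rng.random() < epsilon:
            cached = rng.choice(catalog)                    # explore
        else:
            cached = max(catalog, key=lambda c: counts[c])  # exploit
    return hits / len(trace)

# Skewed synthetic request trace: content "a" dominates.
random.seed(0)
trace = random.choices(["a", "b", "c"], weights=[7, 2, 1], k=2000)
print(bandit_cache(trace, ["a", "b", "c"]))
```

Even this crude learner converges toward caching the dominant content, illustrating why RL-style adaptation can beat static placement when popularity is unknown in advance.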
Funding (DOOC study): supported in part by the National Natural Science Foundation of China under Grant Nos. 60621003 and 60873014.
Funding (multi-path ICN cache privacy study): supported by the Young Scientists Fund of the National Natural Science Foundation of China under Grant No. 61502393 and the Aeronautical Science Foundation of China under Grant No. 2014ZD53049.
Funding (satellite MEC caching study): supported by the National Key Research and Development Program of China under Grant 2020YFB1807700; the National Natural Science Foundation of China (NSFC) under Grants No. 62201414 and 62201432; the Qinchuangyuan Project (OCYRCXM-2022-362); the Fundamental Research Funds for the Central Universities and the Innovation Fund of Xidian University under Grant YJSJ24017; and the Guangzhou Science and Technology Program under Grant 202201011732.
Funding (IoE content delivery study): supported by the National Key Research and Development Program of China under Grant No. 2019YFB1802800 and the National Natural Science Foundation of China under Grant Nos. 62002055, 62032013, 61872073, and 62202247.
Funding (ICN pricing study): supported by the Key R&D Program of Anhui Province in 2020 under Grant No. 202004a05020078 and the China Environment for Network Innovations (CENI) under Grant No. 2016-000052-73-01-000515.
Funding (hierarchical cooperative caching study): supported by the Natural Science Foundation of China (Grants 61901070, 61801065, 62271096, 61871062, U20A20157, and 62061007); the Science and Technology Research Program of Chongqing Municipal Education Commission (Grants KJQN202000603 and KJQN201900611); the Natural Science Foundation of Chongqing (Grants CSTB2022NSCQMSX0468, cstc2020jcyjzdxmX0024, and cstc2021jcyjmsxmX0892); the University Innovation Research Group of Chongqing (Grant CxQT20017); the Youth Innovation Group Support Program of the ICE Discipline of CQUPT (SCIE-QN-2022-04); and the Chongqing Graduate Student Scientific Research Innovation Project (CYB22246).
Abstract: The emergence of various new services has posed a huge challenge to the existing network architecture. To reduce network delay and backhaul pressure, caching popular content at the edge of the network is considered a feasible scheme. However, efficiently utilizing limited caching resources to cache diverse content has remained a difficult problem over the past decade. In this paper, considering time-varying user requests and heterogeneous content sizes, a user-preference-aware hierarchical cooperative caching strategy in an edge-user caching architecture is proposed. We divide the caching strategy into three phases: content placement, content delivery, and content update. In the content placement phase, a cooperative content placement algorithm based on local content popularity is designed to cache content proactively. In the content delivery phase, a cooperative delivery algorithm is proposed to deliver the cached content. In the content update phase, a content update algorithm is proposed according to content popularity. Finally, the proposed caching strategy is validated using the MovieLens dataset, and the results reveal that it improves delay performance by at least 35.3% compared with three benchmark strategies.
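The placement phase, choosing which heterogeneously sized contents to cache proactively under a capacity limit, is essentially a knapsack problem. A common greedy baseline ranks contents by popularity per unit size; this is a minimal sketch of that baseline under assumed inputs, not the paper's cooperative algorithm.

```python
# Hedged sketch: greedy proactive placement by popularity density
# (popularity per unit size), a classic baseline for capacity-limited
# caching with heterogeneous content sizes.
def place_contents(contents, capacity):
    """contents: list of (name, size, local_popularity); returns cached names."""
    ranked = sorted(contents, key=lambda c: c[2] / c[1], reverse=True)
    cached, used = [], 0
    for name, size, pop in ranked:
        if used + size <= capacity:   # skip items that no longer fit
            cached.append(name)
            used += size
    return cached

# Hypothetical catalog: (name, size, local popularity).
catalog = [("A", 4, 0.40), ("B", 2, 0.25), ("C", 3, 0.20), ("D", 1, 0.15)]
print(place_contents(catalog, capacity=6))
```

Note that the most popular item "A" is skipped here because its popularity per unit size is low; that trade-off is exactly what heterogeneous sizes introduce.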
Abstract: A notable portion of cachelines in real-world workloads exhibits inner non-uniform access behaviors. However, modern cache management rarely considers this fine-grained feature, which limits the effective cache capacity of contemporary high-performance spacecraft processors. To harness these non-uniform access behaviors, we propose an efficient cache replacement framework featuring an auxiliary cache specifically designed to retain evicted hot data. This framework reconstructs the cache replacement policy, facilitating data migration between the main cache and the auxiliary cache. Unlike traditional cacheline-granularity policies, the approach excels at identifying and evicting infrequently used data, thereby optimizing cache utilization. The evaluation shows impressive performance improvement, especially on workloads with irregular access patterns. Benefiting from its fine granularity, the proposal achieves superior storage efficiency compared with commonly used cache management schemes, providing a potential optimization opportunity for modern resource-constrained processors such as spacecraft processors. Furthermore, the framework complements existing cache replacement policies and can be integrated seamlessly with minimal modifications, enhancing their overall efficacy.
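The core idea, an auxiliary structure that catches hot data evicted from the main cache and migrates it back on a later hit, resembles a victim cache with a hotness filter. The sketch below models that behavior under assumed policies (LRU main cache, an access-count hotness threshold); the class and its rules are illustrative, not the paper's exact framework.

```python
from collections import OrderedDict

class VictimAssistedCache:
    """Main LRU cache plus an auxiliary cache retaining evicted 'hot' lines.
    Assumed policy for illustration, not the paper's exact replacement rule."""
    def __init__(self, main_size, aux_size, hot_threshold=2):
        self.main = OrderedDict()      # addr -> access count (LRU order)
        self.aux = OrderedDict()
        self.main_size, self.aux_size = main_size, aux_size
        self.hot = hot_threshold

    def access(self, addr):
        if addr in self.main:          # main-cache hit
            self.main[addr] += 1
            self.main.move_to_end(addr)
            return "main-hit"
        if addr in self.aux:           # auxiliary hit: migrate back to main
            count = self.aux.pop(addr)
            self._insert_main(addr, count + 1)
            return "aux-hit"
        self._insert_main(addr, 1)     # miss: fetch into main
        return "miss"

    def _insert_main(self, addr, count):
        if len(self.main) >= self.main_size:
            victim, vcount = self.main.popitem(last=False)  # evict LRU line
            if vcount >= self.hot:     # keep hot victims in the auxiliary cache
                if len(self.aux) >= self.aux_size:
                    self.aux.popitem(last=False)
                self.aux[victim] = vcount
        self.main[addr] = count

c = VictimAssistedCache(main_size=2, aux_size=2)
results = [c.access(x) for x in ["A", "A", "B", "C", "A"]]
print(results)
```

In the trace, hot line "A" is evicted by "C" but survives in the auxiliary cache, so the final access is an auxiliary hit instead of a miss; cold line "B" is simply discarded.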
Funding: Supported in part by the Beijing Natural Science Foundation under Grant L232050; in part by the Project of Cultivation for Young Top-Notch Talents of Beijing Municipal Institutions under Grant BPHR202203225; and in part by the Young Elite Scientists Sponsorship Program by BAST under Grant BYESS2023031.
Abstract: Mobile edge computing (MEC) is a promising paradigm that deploys edge servers (nodes) with computation and storage capacity close to IoT devices. Content providers can cache data in edge servers and provide services for IoT devices, which effectively reduces the delay of acquiring data. With an increasing number of IoT devices requesting services, spectrum resources are generally limited. To meet this challenge, Non-Orthogonal Multiple Access (NOMA) is adopted to improve transmission efficiency. In this paper, we consider the caching scenario in a NOMA-enabled MEC system. All devices compete for the limited resources, each seeking to minimize its own cost. We formulate the caching problem with the goal of minimizing the delay cost of each individual device subject to resource constraints, and reformulate the optimization as a non-cooperative game model. We prove the existence of a Nash equilibrium (NE) in the game model. We then design the Game-based Cost-Efficient Edge Caching Algorithm (GCECA) to solve the problem. The effectiveness of GCECA is validated by both parameter analysis and comparison experiments.
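A non-cooperative caching game of this kind can be illustrated with best-response dynamics in a toy congestion model: each device chooses between caching locally (fixed cost) and fetching remotely (cost growing with contention on the shared channel). The cost numbers and update rule below are assumptions for illustration, not GCECA itself.

```python
# Hedged sketch: best-response dynamics in a toy caching congestion game
# (assumed cost model, not the paper's GCECA).
N = 4                       # devices
LOCAL_COST = 1.0            # delay cost of caching and serving locally

def remote_cost(k):         # shared-channel cost grows with k contenders
    return 0.4 * k

choice = [False] * N        # False = fetch remotely, True = cache locally
for _ in range(20):         # iterate best responses until no one deviates
    stable = True
    for i in range(N):
        k = sum(1 for j in range(N) if not choice[j])   # current fetchers
        # Contention this device would face if it fetched remotely:
        cost_remote = remote_cost(k if not choice[i] else k + 1)
        best = cost_remote >= LOCAL_COST    # cache locally iff cheaper
        if best != choice[i]:
            choice[i], stable = best, False
    if stable:
        break
print(sum(choice), "devices cache locally at equilibrium")
```

The loop stops at a profile where no single device can lower its own cost by switching, i.e. a Nash equilibrium of the toy game, mirroring the NE existence result the abstract states.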
Funding: Supported by the Jilin Provincial Science and Technology Department Natural Science Foundation of China (20210101415JC) and the Jilin Provincial Science and Technology Department Free Exploration Research Project of China (YDZJ202201ZYTS642).
Abstract: Emerging mobile edge computing (MEC) is considered a feasible solution for offloading computation-intensive request tasks generated by mobile wireless equipment (MWE) with limited computational resources and energy. Because the request tasks from one MWE are homogeneous over a long period, it is vital to predeploy the particular service cachings required by those tasks at the MEC server. In this paper, we model a service-caching-assisted MEC framework that accounts for the constraint on the number of service cachings hosted by each edge server and for the migration of request tasks from the current edge server to another that hosts the required service caching. Furthermore, we propose a multi-agent deep reinforcement learning-based computation offloading and task migration decision-making scheme (MBOMS) to minimize the long-term average weighted cost. MBOMS learns a near-optimal offloading and migration policy through centralized training and decentralized execution. Systematic and comprehensive simulation results reveal that MBOMS converges well after training and outperforms five baseline algorithms.
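The migration constraint described above, a task must run where its service caching is deployed or pay an extra migration cost, can be shown with a one-line cost rule. This is a deliberately simple greedy sketch under assumed costs and server names, not the multi-agent deep RL scheme MBOMS.

```python
# Hedged sketch: service-caching-aware task assignment (assumed costs;
# the paper's MBOMS uses multi-agent deep RL, which this does not reproduce).
def offload_cost(server_services, needed, exec_cost=1.0, migrate_cost=0.6):
    # Extra migration cost applies when the server lacks the service caching.
    return exec_cost if needed in server_services else exec_cost + migrate_cost

# Hypothetical edge servers and the service cachings each one hosts.
servers = {"edge1": {"vision"}, "edge2": {"nlp", "vision"}, "edge3": {"nlp"}}
task = "nlp"
best = min(servers, key=lambda s: offload_cost(servers[s], task))
print("assign task to", best)
```

A greedy rule like this ignores future requests and server load; the appeal of the RL approach in the paper is precisely that it optimizes the long-term average cost instead.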
Funding: Supported in part by the National Natural Science Foundation of China under Grant 62172255; in part by the Outstanding Youth Program of the Hubei Natural Science Foundation under Grant 2022CFA080; and by the Wuhan AI Innovation Program (2022010702040056).
Abstract: Edge caching has emerged as a promising application paradigm in 5G networks; by building edge networks to cache content, it can alleviate the traffic load brought about by the rapid growth of Internet of Things (IoT) services and applications. Because Edge Servers (ESs) are limited and user demands are numerous, how to make caching decisions and utilize ES resources is significant. In this paper, we aim to minimize the total system energy consumption in a heterogeneous network and formulate the content caching optimization problem as a Mixed-Integer Non-Linear Program (MINLP). To address it, a Deep Q-Network (DQN)-based method is proposed to improve overall system performance and reduce backhaul traffic load. In addition, the DQN-based method effectively overcomes the limitations of traditional reinforcement learning (RL) in complex scenarios. Simulation results show that the proposed DQN-based method greatly outperforms other benchmark methods, significantly improving the cache hit rate and reducing total system energy consumption across different scenarios.
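The learning loop behind a DQN caching agent can be previewed with its tabular ancestor: Q-learning over a tiny catalog, where the action is which content to keep cached and the reward is a cache hit. Everything here (catalog, popularity, hyperparameters) is an assumed toy setting, and a tabular table stands in for the deep network, so this only illustrates the decision loop, not the paper's method.

```python
import random

random.seed(0)
# Toy stand-in for a DQN caching agent: tabular Q-learning choosing which
# single content an edge server keeps cached (illustrative assumptions only).
CONTENTS = [0, 1, 2]
POPULARITY = [0.7, 0.2, 0.1]          # assumed request distribution

Q = {(s, a): 0.0 for s in CONTENTS for a in CONTENTS}
state, alpha, gamma, eps = 0, 0.1, 0.9, 0.1
for step in range(5000):
    if random.random() < eps:
        action = random.choice(CONTENTS)            # explore
    else:
        action = max(CONTENTS, key=lambda a: Q[(state, a)])
    request = random.choices(CONTENTS, POPULARITY)[0]
    reward = 1.0 if request == action else 0.0      # reward = cache hit
    nxt = action                                    # next state: cached item
    Q[(state, action)] += alpha * (reward + gamma * max(
        Q[(nxt, a)] for a in CONTENTS) - Q[(state, action)])
    state = nxt

best = max(CONTENTS, key=lambda a: Q[(state, a)])
print("learned to cache content", best)
```

The agent converges to caching the most popular content. A DQN replaces the Q table with a neural network so the same loop scales to the large state spaces (many contents, many ESs) that make the paper's MINLP intractable for traditional RL.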