Named Data Networking (NDN) is one of the most promising future Internet architectures, and every router in NDN can cache the contents passing by. This greatly reduces network traffic and improves the speed of content distribution and retrieval. To make full use of the limited caching space in routers, designing an efficient cache replacement policy is an urgent challenge. However, existing cache replacement policies consider only a few of the factors that affect cache performance. In this paper, we present a cache replacement policy based on multiple factors for NDN (CRPM), in which the content with the least cache value is evicted from the caching space. CRPM analyzes the multiple factors that affect caching performance, puts forward the corresponding calculation methods, and utilizes these factors to measure the cache value of contents. Furthermore, a new cache value function is constructed, which keeps high-value content in the router as long as possible, so as to ensure the efficient use of cache resources. The simulation results show that CRPM can effectively improve the cache hit ratio, enhance cache resource utilization, reduce energy consumption, and decrease the hit distance of content acquisition.
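As a rough illustration of the multi-factor idea, the sketch below scores each cached content by a weighted combination of hypothetical factors (request frequency, freshness, and hop distance to the content source) and evicts the lowest-valued entry. The factor names, weights, and the linear value function are illustrative assumptions, not the exact CRPM formulation.

```python
import time

class MultiFactorCache:
    """Toy NDN-style cache: evict the entry with the least cache value.

    The value function below is an illustrative stand-in for CRPM's:
    a weighted sum of request frequency, freshness, and hop distance.
    """

    def __init__(self, capacity, w_freq=0.5, w_fresh=0.3, w_dist=0.2):
        self.capacity = capacity
        self.weights = (w_freq, w_fresh, w_dist)
        self.store = {}  # name -> (data, freq, last_access, hops_to_source)

    def _value(self, freq, last_access, hops):
        w_freq, w_fresh, w_dist = self.weights
        freshness = 1.0 / (1.0 + time.time() - last_access)
        # Content fetched from far away is costlier to re-fetch, so it is
        # worth more to keep (an assumption of this sketch).
        return w_freq * freq + w_fresh * freshness + w_dist * hops

    def insert(self, name, data, hops_to_source):
        if name not in self.store and len(self.store) >= self.capacity:
            victim = min(
                self.store,
                key=lambda n: self._value(*self.store[n][1:]),
            )
            del self.store[victim]  # evict the least-valued content
        self.store[name] = (data, 1, time.time(), hops_to_source)

    def lookup(self, name):
        entry = self.store.get(name)
        if entry is None:
            return None
        data, freq, _, hops = entry
        self.store[name] = (data, freq + 1, time.time(), hops)
        return data
```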
Network coding has been proved to be an effective technique for improving the performance of data broadcast systems, because clients requesting different data items can be served simultaneously in one broadcast. Previous studies showed that its efficiency is highly related to the content of clients' caches. However, existing data broadcast systems do not take network coding information into account when making cache replacement decisions. In this paper, we propose two network coding-aware cache replacement policies, called DLRU and DLRU-CP, to supplement network coding assisted data broadcast in on-demand broadcast environments. In DLRU, both data access history and decoding contribution are taken into account when making replacement decisions. DLRU-CP is based on DLRU but allows clients to retrieve decodable data items that have not been requested yet. The simulation results demonstrate conclusively that the proposed policies outperform the traditional cache replacement policy and can effectively reduce the overall response time.
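A minimal sketch of the DLRU idea as described here: each cached item is ranked by recency of access plus a decoding-contribution term (how often it has helped decode a coded broadcast packet), and the lowest-scoring item is evicted. The scoring formula, the 2x weighting, and the field names are assumptions for illustration, not the paper's exact policy.

```python
class DLRUCache:
    """Toy coding-aware replacement: evict the item whose combined
    recency + decoding-contribution score is lowest (illustrative)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.clock = 0
        self.items = {}  # item_id -> [last_access_tick, decode_contrib]

    def access(self, item_id):
        self.clock += 1
        if item_id in self.items:
            self.items[item_id][0] = self.clock  # refresh recency

    def note_decoding_use(self, item_id):
        # Called when this cached item helps decode a coded broadcast packet.
        if item_id in self.items:
            self.items[item_id][1] += 1

    def insert(self, item_id):
        self.clock += 1
        if item_id not in self.items and len(self.items) >= self.capacity:
            def score(iid):
                last, contrib = self.items[iid]
                # Assumed weighting between recency and decoding value.
                return last + 2 * contrib
            del self.items[min(self.items, key=score)]
        self.items[item_id] = [self.clock, 0]
```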
Hardware prefetching and replacement policies are two techniques for improving the performance of the memory subsystem. While prefetching hides memory latency and improves performance, it interacts with the cache replacement policy, thereby introducing performance variability into applications. To improve the accuracy of reuse prediction for cache blocks in the presence of hardware prefetching, we propose the Prefetch-Adaptive Intelligent Cache Replacement Policy (PAIC). PAIC is designed with separate predictors for prefetch and demand requests, and uses machine learning to optimize reuse prediction in the presence of prefetching. By distinguishing reuse predictions for prefetch and demand requests, PAIC can better combine the performance benefits of prefetching and replacement policies. We evaluate PAIC on a set of 27 memory-intensive programs from SPEC 2006 and SPEC 2017. Under a single-core configuration, PAIC improves performance over the Least Recently Used (LRU) replacement policy by 37.22%, compared with improvements of 32.93% for the Signature-based Hit Predictor (SHiP), 34.56% for Hawkeye, and 34.43% for Glider. Under a four-core configuration, PAIC improves performance over LRU by 20.99%, versus 13.23% for SHiP, 17.89% for Hawkeye, and 15.50% for Glider.
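The core structural idea, two reuse predictors indexed separately for prefetch and demand fills, can be sketched as follows. The table size, the saturating-counter training rule, and the PC hashing are placeholders rather than PAIC's actual learned predictor.

```python
class DualReusePredictor:
    """Separate reuse predictors for demand and prefetch requests
    (structure only; PAIC's real predictor is trained with ML)."""

    TABLE_SIZE = 4096
    MAX_CTR, MIN_CTR = 3, -4  # 3-bit saturating counter range (assumption)

    def __init__(self):
        self.tables = {
            "demand": [0] * self.TABLE_SIZE,
            "prefetch": [0] * self.TABLE_SIZE,
        }

    def _index(self, pc):
        return hash(pc) % self.TABLE_SIZE

    def predict_reuse(self, pc, is_prefetch):
        """Non-negative counter => predict the filled block will be reused."""
        table = self.tables["prefetch" if is_prefetch else "demand"]
        return table[self._index(pc)] >= 0

    def train(self, pc, is_prefetch, was_reused):
        """Update the matching table when the block's fate is known."""
        table = self.tables["prefetch" if is_prefetch else "demand"]
        i = self._index(pc)
        if was_reused:
            table[i] = min(self.MAX_CTR, table[i] + 1)
        else:
            table[i] = max(self.MIN_CTR, table[i] - 1)
```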
Memory-based key-value cache systems, such as Memcached and Redis, have become indispensable components of data center infrastructures and are used to cache performance-critical data to avoid expensive back-end database accesses. As the memory is usually not large enough to hold all the items, cache replacement must be performed to evict some cached items and make room for newly arriving items when there is no free space. Many real-world workloads target small items and have frequent bursts of scans (a scan is a sequence of one-time access requests). The commonly used LRU policy does not work well under such workloads, since LRU needs a large amount of metadata and tends to discard hot items during scans. Small decreases in hit ratio can result in large end-to-end losses in these systems. This paper presents MemSC, a scan-resistant and compact cache replacement framework for Memcached. MemSC assigns a multi-granularity reference flag to each item, which requires only a few bits (two bits are enough for general use) per item to support scan-resistant cache replacement policies. To evaluate MemSC, we implement three representative cache replacement policies (MemSC-HM, MemSC-LH, and MemSC-LF) on MemSC and test them using various workloads. The experimental results show that MemSC outperforms prior techniques. Compared with the optimized LRU policy in Memcached, MemSC-LH reduces the cache miss ratio and the memory usage of the resulting system by up to 23% and 14%, respectively.
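One way to picture a two-bit, scan-resistant reference flag is a small CLOCK-style sweep: a hit promotes an item's flag, and the eviction hand demotes flags until it finds a zero, so one-touch scan items age out quickly while repeatedly hit items survive. This is a generic sketch of that class of policy, not MemSC's exact design.

```python
class TwoBitClockCache:
    """CLOCK-like eviction with a 2-bit reference flag per item.

    A one-time scan access leaves the flag at 1, so scanned items age
    out quickly, while repeatedly hit items climb toward 3 (illustrative
    of scan-resistant multi-granularity flags, not MemSC itself).
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.keys = []    # circular buffer of keys
        self.flags = {}   # key -> 0..3
        self.values = {}
        self.hand = 0

    def get(self, key):
        if key in self.values:
            self.flags[key] = min(3, self.flags[key] + 1)  # promote on hit
            return self.values[key]
        return None

    def put(self, key, value):
        if key in self.values:
            self.values[key] = value
            return
        if len(self.keys) >= self.capacity:
            while True:  # sweep: demote flags until a 0-flag victim appears
                k = self.keys[self.hand]
                if self.flags[k] == 0:
                    del self.values[k], self.flags[k]
                    self.keys[self.hand] = key
                    self.hand = (self.hand + 1) % len(self.keys)
                    break
                self.flags[k] -= 1
                self.hand = (self.hand + 1) % len(self.keys)
        else:
            self.keys.append(key)
        self.flags[key] = 1  # new items start at 1, like a single touch
        self.values[key] = value
```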
Spark is a distributed, memory-based data processing framework, and memory allocation is a central question in Spark research. A good memory allocation scheme can effectively improve the efficiency of task execution and the utilization of Spark's memory resources. Targeting the memory allocation problem in Spark 2.x, this paper optimizes the memory allocation strategy by analyzing the Spark memory model, the existing cache replacement algorithms, and the memory allocation methods, on the basis of minimizing the storage area and allocating the execution area according to demand. The optimization has two parts: cache replacement optimization and memory allocation optimization. First, in the storage area, the cache replacement algorithm is optimized according to the characteristics of RDD Partitions, combined with PCA dimensionality reduction. Four features of each RDD Partition are selected; when an RDD cache is replaced, only the two most important features are chosen by PCA each time, thereby ensuring the generality of the cache replacement strategy. Second, the memory allocation strategy of the execution area is optimized according to the memory requirements of Tasks and the memory space of the storage area. A series of experiments in Spark on YARN mode are carried out to verify that the optimization algorithm is effective and improves cluster performance.
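To make the "pick the two most important of four features via PCA" step concrete, here is a minimal sketch: rank features by the magnitude of their loading on the first principal component and keep the top two. The specific four RDD Partition features (size, compute cost, reference count, recency are plausible candidates) and the ranking rule are assumptions of this sketch.

```python
import numpy as np

def select_top_features(partition_features, k=2):
    """Pick the k features that dominate the first principal component.

    `partition_features` is an (n_partitions x n_features) matrix of
    hypothetical RDD Partition features. This mirrors the idea of
    keeping only the two most informative features per replacement round.
    """
    X = np.asarray(partition_features, dtype=float)
    X = X - X.mean(axis=0)                  # center each feature column
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    pc1 = eigvecs[:, -1]                    # leading principal component
    return np.argsort(-np.abs(pc1))[:k]     # indices of top-k loadings

# Example: 5 partitions x 4 features -> indices of the 2 key features
feats = np.random.rand(5, 4)
print(select_top_features(feats))
```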
Due to the explosion of network data traffic and IoT devices, edge servers are overloaded and slow to respond to the massive volume of online requests. A large number of studies have shown that edge caching can solve this problem effectively. This paper proposes a distributed edge collaborative caching mechanism for the Internet online request service scenario. It addresses the large average access delay caused by the unbalanced load of edge servers, meets users' differentiated service demands, and improves user experience. In particular, the edge cache node selection algorithm is optimized, and a novel edge cache replacement strategy that considers differentiated user requests is proposed. This mechanism shortens the response time for a large number of user requests. Experimental results show that, compared with a state-of-the-art online edge caching algorithm, the proposed edge collaborative caching strategy reduces the average response delay by 9%. It also increases user utility by 4.5 times in differentiated service scenarios, and significantly reduces the time complexity of the edge caching algorithm.
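A toy picture of replacement under differentiated requests: weight each content's popularity by the priority of the service class requesting it, and evict the lowest weighted score. The class names, weights, and the multiplicative score are hypothetical, intended only to show how differentiated demands can enter an eviction decision.

```python
class DifferentiatedEdgeCache:
    """Toy edge cache whose eviction score weights request popularity by a
    per-class service priority (assumed weights; not the paper's strategy)."""

    CLASS_WEIGHT = {"delay_sensitive": 3.0, "standard": 1.0, "bulk": 0.5}

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # content_id -> (popularity, service_class)

    def _score(self, content_id):
        pop, svc = self.entries[content_id]
        return pop * self.CLASS_WEIGHT[svc]

    def request(self, content_id, service_class):
        if content_id in self.entries:
            pop, _ = self.entries[content_id]
            self.entries[content_id] = (pop + 1, service_class)
            return True   # hit
        if len(self.entries) >= self.capacity:
            victim = min(self.entries, key=self._score)
            del self.entries[victim]
        self.entries[content_id] = (1, service_class)
        return False      # miss; fetch from a neighbor edge node or cloud
```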
Network processing in the current Internet operates on the data packet in its entirety, which is problematic when encountering network congestion. The newly proposed Internet service named Qualitative Communication changes the network processing paradigm to an even finer granularity, namely the chunk level, which renders obsolete many existing networking policies and schemes, especially the caching algorithms and cache replacement policies that have been extensively explored in Web caching, Content Delivery Networks (CDN), and Information-Centric Networks (ICN). This paper outlines the new factors introduced by random linear network coding-based Qualitative Communication and shows the importance and necessity of considering them. A novel metric is proposed that takes these new factors into consideration. An optimization problem is formulated to maximize the metric value of all retained chunks in the local storage of network nodes under a storage-limit constraint, and a cache replacement scheme that obtains the optimal result in a recursive manner is proposed accordingly. With the help of the introduced intelligent cache replacement algorithm, the performance evaluations show remarkably reduced end-to-end latency compared to existing schemes in various network scenarios.
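The storage-constrained retention problem described here has the shape of a 0/1 knapsack: keep the subset of chunks whose total metric value is maximal within the storage limit. The recursive dynamic program below solves that abstract form; the paper's actual metric and recursive scheme are richer than this sketch.

```python
def best_retained_chunks(chunks, storage_limit):
    """chunks: list of (chunk_id, size, metric_value); returns the set of
    chunk ids maximizing total metric value within storage_limit.
    Classic 0/1 knapsack DP over integer sizes (illustrative model)."""
    best = [0.0] * (storage_limit + 1)
    keep = [set() for _ in range(storage_limit + 1)]
    for cid, size, value in chunks:
        # Iterate capacities downward so each chunk is used at most once.
        for cap in range(storage_limit, size - 1, -1):
            if best[cap - size] + value > best[cap]:
                best[cap] = best[cap - size] + value
                keep[cap] = keep[cap - size] | {cid}
    return keep[storage_limit]

# Example: chunk ids with (size, metric value) under a 10-unit store
print(best_retained_chunks([("a", 4, 5.0), ("b", 3, 4.0), ("c", 5, 6.5)], 10))
# -> {'a', 'c'}: total size 9, total metric value 11.5
```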
In this work, we employ a cache-enabled UAV to deliver context information to end devices that make timely and intelligent decisions. Different from traditional network traffic, context information varies with time and brings an age-constrained requirement. The cached content items should be refreshed in time based on their age status to guarantee the freshness of user-received contents, which however consumes additional transmission resources. Traditional cache methods separate caching from transmission and are therefore not suitable for dynamic context information. We jointly design cache replacing and content delivery based on both the user requests and the content dynamics to maximize the traffic offloaded from the ground network. The problem is formulated as a Markov Decision Process (MDP). A sufficient condition for cache replacing is found in closed form, whereby a dynamic cache replacing and content delivery scheme is proposed based on the Deep Q-Network (DQN). Extensive simulations have been conducted. According to the simulation results, compared with the conventional popularity-based and the modified Least Frequently Used (i.e., LFU-dynamic) schemes, the UAV can offload around 30% of the traffic from the ground network by utilizing the proposed scheme in the urban scenario.
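The age-constrained setting can be illustrated with a simple utility: a cached item's worth is its request rate discounted by staleness, and items past a maximum age must be refreshed (at a transmission cost) before serving. The linear freshness decay and field names are assumptions of this sketch, not the paper's MDP/DQN policy.

```python
class AgeAwareCache:
    """Toy age-constrained cache for dynamic context items (illustrative)."""

    def __init__(self, capacity, max_age):
        self.capacity = capacity
        self.max_age = max_age
        self.items = {}  # item_id -> (request_rate, cached_at)

    def utility(self, item_id, now):
        rate, cached_at = self.items[item_id]
        age = now - cached_at
        freshness = max(0.0, 1.0 - age / self.max_age)  # linear decay
        return rate * freshness

    def serve(self, item_id, now):
        rate, cached_at = self.items[item_id]
        if now - cached_at > self.max_age:
            # Stale: refresh from the source, spending transmission resources.
            self.items[item_id] = (rate, now)
        return item_id

    def admit(self, item_id, request_rate, now):
        if item_id not in self.items and len(self.items) >= self.capacity:
            victim = min(self.items, key=lambda i: self.utility(i, now))
            del self.items[victim]  # evict the least useful (stalest/coldest)
        self.items[item_id] = (request_rate, now)
```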
With the development of the Internet of Vehicles, the traditional centralized content caching mode transmits content through the core network, which causes a large delay and cannot meet the demands of delay-sensitive services. To solve these problems, we propose an edge collaborative caching scheme on the basis of the vehicle caching network. Road side units (RSUs) and mobile edge computing (MEC) are used to collect vehicle information and to predict and cache popular content, thereby providing low-latency content delivery services. However, the storage capacity of a single RSU severely limits the edge caching performance and cannot handle intensive content requests at the same time. Through content sharing, collaborative caching can relieve the storage burden on caching servers. Therefore, we integrate RSUs and collaborative caching to build a MEC-assisted vehicle edge collaborative caching (MVECC) scheme, so as to realize collaborative caching among cloud, edge, and vehicles. MVECC uses deep reinforcement learning to predict what needs to be cached on RSUs, which enables RSUs to cache more popular content. In addition, MVECC introduces a mobility-aware cache replacement scheme at the edge network to reduce redundant caching and improve cache efficiency, which allows an RSU to dynamically replace cached content in response to the mobility of vehicles. The simulation results show that the proposed MVECC scheme can improve cache performance in terms of energy cost and content hit rate.
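One simple way to make replacement mobility-aware is to weight each content's predicted popularity by how long the vehicles interested in it will remain in the RSU's coverage, and evict the content with the lowest weighted score. The dwell-time estimate (distance over speed) and the multiplicative score are hypothetical simplifications, not MVECC's actual scheme.

```python
def mobility_aware_victim(cache):
    """Pick a replacement victim at an RSU by weighting predicted requests
    with the remaining dwell time of interested vehicles (illustrative)."""
    def score(entry):
        # entry: dict with predicted_requests and a list of
        # (vehicle_speed, distance_to_coverage_edge) per interested vehicle
        dwell = sum(d / max(v, 1e-6) for v, d in entry["vehicles"])
        return entry["predicted_requests"] * dwell
    return min(cache, key=lambda cid: score(cache[cid]))

# Example: content 'b' serves fast-leaving vehicles, so it is evicted first
cache = {
    "a": {"predicted_requests": 10, "vehicles": [(10.0, 200.0)]},
    "b": {"predicted_requests": 10, "vehicles": [(30.0, 50.0)]},
}
print(mobility_aware_victim(cache))  # -> 'b'
```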
Due to the proliferation of the Internet and intranets, distributed storage systems have received a lot of attention. These systems span a large number of machines and store huge amounts of data for many users. In distributed storage systems, a row can be accessed directly using a row key. We concentrate on the problem of efficiently processing queries whose predicate is on a column other than the row key. In this paper, we present a cache management technique, called DICE, which maintains the results of range queries to support subsequent range queries. To accelerate the search over the cached query results, we use modified Interval Skip Lists. In addition, we devise a novel cache replacement policy, since DICE maintains intervals rather than individual data items. Because our cache replacement policy considers the properties of intervals, our proposed technique is more efficient than traditional buffer replacement algorithms. Our experimental results demonstrate the efficiency of the proposed technique.
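The essential data-structure idea, keeping cached range-query results as intervals and finding the cached intervals that overlap a new query, can be sketched with a sorted list in place of the paper's modified Interval Skip Lists. All names here are illustrative.

```python
import bisect

class IntervalCache:
    """Toy cache of range-query results stored as intervals, kept sorted
    by start point so overlap lookups can stop early (a sorted list with
    bisect stands in for the paper's modified Interval Skip Lists)."""

    def __init__(self):
        self.starts = []      # sorted interval start points
        self.intervals = []   # (start, end, result_rows), same order

    def add(self, start, end, rows):
        i = bisect.bisect_left(self.starts, start)
        self.starts.insert(i, start)
        self.intervals.insert(i, (start, end, rows))

    def overlapping(self, qstart, qend):
        """Return cached intervals overlapping the query range [qstart, qend]."""
        hits = []
        for s, e, rows in self.intervals:
            if s > qend:
                break             # sorted by start: no later interval overlaps
            if e >= qstart:
                hits.append((s, e, rows))
        return hits

cache = IntervalCache()
cache.add(10, 20, ["r1", "r2"])
cache.add(30, 40, ["r3"])
print(cache.overlapping(15, 35))  # both cached intervals overlap the query
```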
One of the key research fields of content-centric networking (CCN) is to develop more efficient cache replacement policies to improve the hit ratio of CCN in-network caching. However, most existing cache strategies, designed mainly around the time or frequency of content access, cannot properly deal with the dynamicity of content popularity in the network. In this paper, we propose a fast-convergence cache replacement algorithm based on a dynamic classification method for CCN, named FCDC. It develops a dynamic classification method to reduce the time complexity of cache inquiry, which achieves a higher cache hit rate than a random classification method under dynamic changes in content popularity. Meanwhile, in order to relieve the influence of dynamic content popularity, it designs a weighting function to speed up the convergence of the cache hit rate in the CCN router. Experimental results show that the proposed scheme outperforms replacement policies based on least recently used (LRU) and recent usage frequency (RUF) in cache hit rate and resiliency when content popularity in the network varies.
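To illustrate the two ingredients, a weighting function that tracks shifting popularity and a classification that narrows cache inquiry, here is a minimal sketch using an exponentially weighted moving average and threshold buckets. The EWMA form, alpha value, and thresholds are stand-ins for FCDC's actual weighting function and dynamic classification.

```python
class PopularityTracker:
    """EWMA popularity estimate per content: recent request windows are
    weighted more heavily, so the estimate converges quickly after a
    popularity shift (illustrative of FCDC's weighting idea)."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.popularity = {}

    def record_window(self, request_counts):
        """request_counts: {content: hits in the latest time window}."""
        for name in set(self.popularity) | set(request_counts):
            old = self.popularity.get(name, 0.0)
            new = request_counts.get(name, 0)
            self.popularity[name] = self.alpha * new + (1 - self.alpha) * old

    def classify(self, thresholds=(1.0, 10.0)):
        """Bucket contents into cold/warm/hot classes so cache inquiry can
        search a small class first (a rough analogue of the paper's
        dynamic classification; thresholds are assumed)."""
        lo, hi = thresholds
        return {
            name: "hot" if p >= hi else "warm" if p >= lo else "cold"
            for name, p in self.popularity.items()
        }
```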
In-network caching is one of the most important issues in content-centric networking (CCN) and strongly influences the performance of the caching system. Although much work has been done on in-network caching scheme design for CCN, most of it does not address multiple network attribute parameters jointly during caching algorithm design. To fill this gap, a new in-network caching scheme based on grey relational analysis (GRA) is proposed. The authors first define two new metric parameters, named request influence degree (RID) and cache replacement rate. The RID indicates the importance of a node along the content delivery path from the view of the arriving interest packets; the cache replacement rate denotes the caching load of the node. Then, combining the number of hops a request travels from the user and the node traffic, four network attribute parameters are considered during the in-network caching algorithm design. Based on these four parameters, a GRA-based in-network caching algorithm is proposed, which can significantly improve the performance of CCN. Finally, extensive simulation based on ndnSIM demonstrates that the GRA-based caching scheme achieves a lower load on the source server and fewer average hops than the existing betweenness (Betw) scheme and the ALWAYS scheme.
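Grey relational analysis itself is a standard computation, sketched below: normalize each attribute column, measure each candidate's deviation from an ideal reference series, and average the grey relational coefficients into one grade per candidate. The four-attribute input matches the paper (RID, cache replacement rate, hops, node traffic), while the normalization direction and the distinguishing coefficient rho = 0.5 are common defaults assumed here.

```python
import numpy as np

def grey_relational_grades(samples, rho=0.5):
    """Standard GRA: score each row of `samples` (one row per candidate,
    columns = four attributes such as RID, cache replacement rate, hops,
    node traffic) against an ideal reference series of all ones after
    normalization. rho is the usual distinguishing coefficient."""
    X = np.asarray(samples, dtype=float)
    span = X.max(axis=0) - X.min(axis=0)
    X = (X - X.min(axis=0)) / (span + 1e-12)   # normalize columns to [0, 1]
    delta = np.abs(X - 1.0)                    # deviation from the ideal series
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)
    return coeff.mean(axis=1)                  # one relational grade per row

# Example: 3 candidate nodes x 4 attributes -> one grade each
print(grey_relational_grades([[3, 0.2, 5, 80],
                              [1, 0.5, 2, 40],
                              [2, 0.1, 7, 60]]))
```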