Journal Articles
12 articles found
1. A Cache Replacement Policy Based on Multi-Factors for Named Data Networking (cited by 1)
Authors: Meiju Yu, Ru Li, Yuwen Chen. Computers, Materials & Continua (SCIE, EI), 2020, Issue 10, pp. 321-336.
Named Data Networking (NDN) is one of the most promising future Internet architectures, and every NDN router can cache the contents passing by. This greatly reduces network traffic and speeds up content distribution and retrieval. Making full use of the limited caching space in routers calls for an efficient cache replacement policy, yet existing policies consider only a few of the factors that affect cache performance. In this paper, we present a cache replacement policy based on multi-factors for NDN (CRPM), which evicts the content with the least cache value from the caching space. CRPM analyzes the factors that affect caching performance, puts forward corresponding calculation methods, and uses these factors to measure the cache value of contents. Furthermore, a new cache value function is constructed, which keeps high-value content in the router as long as possible so as to ensure the efficient use of cache resources. Simulation results show that CRPM can effectively improve the cache hit ratio, enhance cache resource utilization, reduce energy consumption, and decrease the hit distance of content acquisition.
Keywords: cache replacement policy; named data networking; content popularity; freshness; energy consumption
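The abstract above does not spell out CRPM's actual value function; as a hedged illustration of the general idea, a cache that scores items by a weighted mix of popularity and freshness (the weights and the freshness formula here are invented for the sketch) and evicts the minimum might look like:

```python
class MultiFactorCache:
    """Toy cache that evicts the item with the least 'cache value'.

    The value function (weighted popularity + freshness) is a hypothetical
    stand-in for CRPM's multi-factor metric, not the paper's formula.
    """

    def __init__(self, capacity, w_pop=0.7, w_fresh=0.3):
        self.capacity = capacity
        self.w_pop, self.w_fresh = w_pop, w_fresh
        self.store = {}   # name -> content
        self.hits = {}    # name -> request count (popularity proxy)
        self.stamp = {}   # name -> logical time of insertion/refresh
        self.clock = 0    # logical clock instead of wall time

    def _value(self, name):
        # Freshness decays as the item ages in the cache.
        freshness = 1.0 / (1 + self.clock - self.stamp[name])
        return self.w_pop * self.hits[name] + self.w_fresh * freshness

    def get(self, name):
        self.clock += 1
        if name in self.store:
            self.hits[name] += 1
            return self.store[name]
        return None

    def put(self, name, content):
        self.clock += 1
        if name not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=self._value)  # least cache value
            for d in (self.store, self.hits, self.stamp):
                d.pop(victim)
        self.store[name] = content
        self.hits.setdefault(name, 1)
        self.stamp[name] = self.clock
```

With capacity 2, repeatedly requesting one item makes it "popular", so a later insertion evicts the cold item instead.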
2. Network coding-aware cache replacement policy in on-demand broadcast environments
Authors: 陈君, Victor C. S. Lee, Edward Chan. Journal of Harbin Institute of Technology (New Series) (EI, CAS), 2012, Issue 5, pp. 92-100.
Network coding has proved to be an effective technique for improving the performance of data broadcast systems, because clients requesting different data items can be served simultaneously in one broadcast. Previous studies showed that its efficiency is highly related to the contents of clients' caches. However, existing data broadcast systems do not take network coding information into account when making cache replacement decisions. In this paper, we propose two network coding-aware cache replacement policies, DLRU and DLRU-CP, to supplement network coding-assisted data broadcast in on-demand broadcast environments. DLRU takes both data access and decoding contribution into account when making replacement decisions. DLRU-CP is based on DLRU but additionally allows clients to retrieve decodable data items that have not been requested yet. Simulation results demonstrate that the proposed policies outperform a traditional cache replacement policy and can effectively reduce the overall response time.
Keywords: network coding; cache replacement; on-demand broadcast; mobile computing
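DLRU's exact bookkeeping is not given in the abstract; a minimal sketch of the stated idea, in which the eviction victim is chosen by combining recency with how many pending coded packets an item helps decode (the tuple-based tie-breaking below is an assumption of this sketch), could be:

```python
def dlru_victim(cache, last_access, decode_contrib, now):
    """Pick an eviction victim in a DLRU-like way (sketch, not the
    paper's exact rule): prefer items that are both old and useless
    for decoding pending network-coded broadcasts.

    cache          -- iterable of item ids currently cached
    last_access    -- item id -> last access time
    decode_contrib -- item id -> number of pending coded packets this
                      item helps decode (0 if none)
    """
    def score(item):
        age = now - last_access[item]              # larger = older
        # Primary key: decoding contribution (low first).
        # Secondary key: -age, so the oldest item loses ties.
        return (decode_contrib.get(item, 0), -age)

    return min(cache, key=score)
```

Items with decoding value are kept even when old, which is exactly the information a plain LRU would ignore.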
3. A Prefetch-Adaptive Intelligent Cache Replacement Policy Based on Machine Learning (cited by 2)
Authors: 杨会静, 方娟, 蔡旻, 才智. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2023, Issue 2, pp. 391-404.
Hardware prefetching and replacement policies are two techniques for improving the performance of the memory subsystem. While prefetching hides memory latency and improves performance, it interacts with the cache replacement policy, introducing performance variability across applications. To improve the accuracy of reuse prediction for cache blocks in the presence of hardware prefetching, we propose the Prefetch-Adaptive Intelligent Cache Replacement Policy (PAIC). PAIC is designed with separate predictors for prefetch and demand requests, and uses machine learning to optimize reuse prediction in the presence of prefetching. By distinguishing reuse predictions for prefetch and demand requests, PAIC can better combine the performance benefits of prefetching and replacement policies. We evaluate PAIC on a set of 27 memory-intensive programs from SPEC 2006 and SPEC 2017. Under a single-core configuration, PAIC improves performance over the Least Recently Used (LRU) replacement policy by 37.22%, compared with improvements of 32.93% for the Signature-based Hit Predictor (SHiP), 34.56% for Hawkeye, and 34.43% for Glider. Under a four-core configuration, PAIC improves performance over LRU by 20.99%, versus 13.23% for SHiP, 17.89% for Hawkeye, and 15.50% for Glider.
Keywords: hardware prefetching; machine learning; Prefetch-Adaptive Intelligent Cache Replacement Policy (PAIC); replacement policy
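PAIC's learned predictors are not described in the abstract; purely to illustrate the structural point (separate reuse predictors for prefetch and demand requests), a toy version with per-signature saturating counters (the counter widths and thresholds are this sketch's assumptions, not the paper's trained models) might be:

```python
from collections import defaultdict

class SplitReusePredictor:
    """Toy PAIC-style structure: one table of saturating counters per
    request class ("demand" vs. "prefetch"), indexed by the request's
    signature (e.g., the PC of the triggering instruction).
    """

    def __init__(self, lo=-4, hi=4):
        self.lo, self.hi = lo, hi
        self.table = {"demand": defaultdict(int),
                      "prefetch": defaultdict(int)}

    def train(self, kind, sig, was_reused):
        # Push the counter up on observed reuse, down otherwise.
        t = self.table[kind]
        t[sig] = min(self.hi, t[sig] + 1) if was_reused \
            else max(self.lo, t[sig] - 1)

    def predict_reuse(self, kind, sig):
        # Non-negative counter -> predict the block will be reused.
        return self.table[kind][sig] >= 0
```

Because the tables are separate, a signature whose prefetches are useless can be predicted dead while its demand accesses are still predicted reusable.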
4. MemSC: A Scan-Resistant and Compact Cache Replacement Framework for Memory-Based Key-Value Cache Systems (cited by 2)
Authors: Mei Li, Hong-Jun Zhang, Yan-Jun Wu, Chen Zhao. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2017, Issue 1, pp. 55-67.
Memory-based key-value cache systems, such as Memcached and Redis, have become indispensable components of data center infrastructures and are used to cache performance-critical data to avoid expensive back-end database accesses. As memory is usually not large enough to hold all items, cache replacement must evict some cached items to make room for newly arriving ones when there is no free space. Many real-world workloads target small items and have frequent bursts of scans (a scan is a sequence of one-time access requests). The commonly used LRU policy does not work well under such workloads, since it needs a large amount of metadata and tends to discard hot items during scans, and even small decreases in hit ratio can result in large end-to-end losses in these systems. This paper presents MemSC, a scan-resistant and compact cache replacement framework for Memcached. MemSC assigns a multi-granularity reference flag to each item, requiring only a few bits (two bits are enough for general use) per item to support scan-resistant cache replacement policies. To evaluate MemSC, we implement three representative cache replacement policies (MemSC-HM, MemSC-LH, and MemSC-LF) on top of it and test them with various workloads. The experimental results show that MemSC outperforms prior techniques. Compared with the optimized LRU policy in Memcached, MemSC-LH reduces the cache miss ratio and the memory usage of the resulting system by up to 23% and 14%, respectively.
Keywords: key-value cache system; cache replacement; scan resistance; space efficiency
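The three concrete MemSC policies differ in detail, but the shared mechanism (a tiny per-item reference flag instead of LRU list metadata) is easy to sketch. In this hedged toy version, the flag saturates at 2 bits, one-time scan accesses leave items at flag 0 so they are evicted first, and the decay-on-eviction step is this sketch's simplification:

```python
class TwoBitFlagCache:
    """Sketch of a scan-resistant policy built on a 2-bit reference
    flag per item, in the spirit of MemSC (not any exact MemSC-* policy).
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}   # key -> value
        self.flag = {}   # key -> 0..3 reference flag (2 bits)

    def get(self, key):
        if key in self.data:
            self.flag[key] = min(3, self.flag[key] + 1)  # saturating bump
            return self.data[key]
        return None

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            # Evict a key with the lowest flag: scan items (flag 0)
            # go before repeatedly accessed hot items.
            victim = min(self.data, key=lambda k: self.flag[k])
            del self.data[victim], self.flag[victim]
            # Decay survivors so stale items become evictable again.
            for k in self.flag:
                self.flag[k] = max(0, self.flag[k] - 1)
        self.data[key] = value
        self.flag.setdefault(key, 0)
```

A burst of one-time insertions then cycles through flag-0 items and never displaces the hot set, which is the scan-resistance property the paper targets.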
5. A Dynamic Memory Allocation Optimization Mechanism Based on Spark (cited by 2)
Authors: Suzhen Wang, Shanshan Geng, Zhanfeng Zhang, Anshan Ye, Keming Chen, Zhaosheng Xu, Huimin Luo, Gangshan Wu, Lina Xu, Ning Cao. Computers, Materials & Continua (SCIE, EI), 2019, Issue 8, pp. 739-757.
Spark is a memory-based distributed data processing framework, and memory allocation is a central question in Spark research. A good memory allocation scheme can effectively improve both task execution efficiency and Spark's memory resource utilization. Targeting the memory allocation problem in Spark 2.x, this paper optimizes the allocation strategy by analyzing the Spark memory model, existing cache replacement algorithms, and existing allocation methods, on the basis of minimizing the storage area and allocating the execution area on demand. The work consists of two parts: cache replacement optimization and memory allocation optimization. First, in the storage area, the cache replacement algorithm is optimized according to the characteristics of RDD partitions, combined with PCA dimensionality reduction: four features of an RDD partition are selected, and at each cache replacement only the two most important features are retained via PCA, ensuring that the replacement strategy generalizes. Second, the memory allocation strategy of the execution area is optimized according to the memory requirements of tasks and the available space in the storage area. A series of experiments in Spark on YARN mode verifies the effectiveness of the optimization algorithm and the resulting improvement in cluster performance.
Keywords: memory computing; memory allocation optimization; cache replacement optimization
6. ECC: Edge Collaborative Caching Strategy for Differentiated Services Load-Balancing (cited by 1)
Authors: Fang Liu, Zhenyuan Zhang, Zunfu Wang, Yuting Xing. Computers, Materials & Continua (SCIE, EI), 2021, Issue 11, pp. 2045-2060.
Due to the explosion of network data traffic and IoT devices, edge servers are overloaded and slow to respond to the massive volume of online requests. Many studies have shown that edge caching can solve this problem effectively. This paper proposes a distributed edge collaborative caching mechanism for Internet online request service scenarios. It addresses the large average access delay caused by unbalanced load across edge servers, meets users' differentiated service demands, and improves user experience. In particular, the edge cache node selection algorithm is optimized, and a novel edge cache replacement strategy that accounts for differentiated user requests is proposed. This mechanism shortens the response time for large numbers of user requests. Experimental results show that, compared with a state-of-the-art online edge caching algorithm, the proposed edge collaborative caching strategy reduces the average response delay by 9%, increases user utility by 4.5 times in differentiated service scenarios, and significantly reduces the time complexity of the edge caching algorithm.
Keywords: edge collaborative caching; differentiated service; cache replacement strategy; load balancing
7. Optimal chunk caching in network coding-based qualitative communication
Authors: Lijun Dong, Richard Li. Digital Communications and Networks (SCIE, CSCD), 2022, Issue 1, pp. 44-50.
Network processing in the current Internet operates on entire data packets, which is problematic under network congestion. The newly proposed Internet service named Qualitative Communication shifts the network processing paradigm to an even finer granularity, the chunk level, which obsoletes many existing networking policies and schemes, especially the caching algorithms and cache replacement policies that have been extensively explored in web caching, Content Delivery Networks (CDN), and Information-Centric Networks (ICN). This paper outlines the new factors introduced by random linear network coding-based Qualitative Communication and shows the importance and necessity of considering them. A novel metric is proposed that takes these factors into account, and an optimization problem is formulated to maximize the total metric value of the chunks retained in the local storage of network nodes under a storage-capacity constraint. A cache replacement scheme that obtains the optimal result in a recursive manner is proposed accordingly. Performance evaluations show that the proposed cache replacement algorithm remarkably reduces end-to-end latency compared to existing schemes in various network scenarios.
Keywords: Internet; qualitative communication; New IP; chunk caching; random linear network coding; end-to-end latency; cache replacement policy; degree of freedom; distance; packet size
8. Age-Constrained Dynamic Content Replacing and Delivering for UAV-Assisted Context Awareness
Authors: Liudi Wang, Shan Zhang, Xishuo Li, Hongbin Luo. China Communications (SCIE, CSCD), 2022, Issue 7, pp. 277-293.
In this work, we employ a cache-enabled UAV to deliver context information to end devices that make timely and intelligent decisions. Unlike traditional network traffic, context information varies with time and brings an age-constrained requirement: cached content items must be refreshed in time, according to their age status, to guarantee the freshness of the contents users receive, which in turn consumes additional transmission resources. Traditional cache methods separate caching from transmitting and are therefore ill-suited to dynamic context information. We jointly design cache replacement and content delivery, based on both user requests and content dynamics, to maximize the traffic offloaded from the ground network. The problem is formulated as a Markov Decision Process (MDP). A sufficient condition for cache replacement is found in closed form, from which a dynamic cache replacing and content delivery scheme is derived based on a Deep Q-Network (DQN). Extensive simulations show that, compared with the conventional popularity-based scheme and a modified Least Frequently Used (LFU-dynamic) scheme, the UAV can offload around 30% of the traffic from the ground network with the proposed scheme in an urban scenario.
Keywords: UAV; offloading; context awareness; onboard caching; cache replacing; content freshness
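The MDP/DQN formulation is beyond a snippet, but the age constraint itself is simple to state: a cached context item is only servable while its age of information stays within the freshness bound. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class CachedItem:
    """Illustrative fields; the paper's state definition is richer."""
    name: str
    generated_at: float   # when the source produced this version
    max_age: float        # freshness bound the application tolerates

def needs_refresh(item, now):
    """True if the cached copy has aged out and must be re-fetched
    before it can be served (the 'age-constrained' requirement)."""
    return now - item.generated_at > item.max_age
```

The scheme's interesting part is deciding *which* items are worth spending transmission resources to keep fresh; this check is only the constraint being traded against.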
9. Deep Reinforcement Learning Empowered Edge Collaborative Caching Scheme for Internet of Vehicles
Authors: Xin Liu, Siya Xu, Chao Yang, Zhili Wang, Hao Zhang, Jingye Chi, Qinghan Li. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 7, pp. 271-287.
With the development of the Internet of Vehicles, the traditional centralized content caching mode transmits content through the core network, which causes large delays and cannot meet the demands of delay-sensitive services. To solve these problems, we propose an edge collaborative caching scheme built on the vehicle caching network. Road side units (RSUs) and mobile edge computing (MEC) are used to collect vehicle information and to predict and cache popular content, thereby providing low-latency content delivery services. However, the storage capacity of a single RSU severely limits edge caching performance and cannot handle intensive content requests at the same time. Through content sharing, collaborative caching can relieve the storage burden on caching servers. We therefore integrate RSUs and collaborative caching into a MEC-assisted vehicle edge collaborative caching (MVECC) scheme that realizes collaborative caching among cloud, edge, and vehicles. MVECC uses deep reinforcement learning to predict what should be cached on RSUs, enabling them to cache more popular content. In addition, MVECC introduces a mobility-aware cache replacement scheme at the edge network to reduce redundant caching and improve cache efficiency, allowing RSUs to dynamically replace cached content in response to vehicle mobility. Simulation results show that the proposed MVECC scheme improves cache performance in terms of energy cost and content hit rate.
Keywords: Internet of Vehicles; vehicle caching network; collaborative caching; cache replacement; deep reinforcement learning
10. DICE: An Effective Query Result Cache for Distributed Storage Systems (cited by 1)
Authors: Jun-Ki Min (Member, ACM), Mi-Young Lee. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2010, Issue 5, pp. 933-944.
With the proliferation of the Internet and intranets, distributed storage systems have received a lot of attention. These systems span a large number of machines and store huge amounts of data for many users. In distributed storage systems, a row can be accessed directly using its row key; we concentrate on efficiently processing queries whose predicate is on a column other than the row key. In this paper, we present a cache management technique called DICE, which maintains the results of range queries to answer subsequent range queries. To accelerate searching the cached results, we use modified Interval Skip Lists. In addition, we devise a novel cache replacement policy, since DICE maintains intervals rather than individual data items. Because this policy considers the properties of intervals, our technique is more efficient than traditional buffer replacement algorithms. Experimental results demonstrate the efficiency of the proposed technique.
Keywords: distributed system; range query; query caching; Interval Skip List; cache replacement
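The core idea of caching range-query results as intervals can be shown in a few lines. This hedged sketch answers a new range query from any cached interval that covers it; a linear scan stands in for the modified Interval Skip List the paper uses for fast lookup, and the single-column integer keys are an assumption of the sketch:

```python
class IntervalCache:
    """Toy DICE-style query result cache: entries are (low, high, keys)
    triples recording that all keys in [low, high] are cached.
    """

    def __init__(self):
        self.entries = []   # list of (low, high, sorted key list)

    def insert(self, low, high, keys):
        self.entries.append((low, high, sorted(keys)))

    def lookup(self, low, high):
        """Return the result of the range query [low, high] if some
        cached interval covers it, else None (cache miss)."""
        for lo, hi, keys in self.entries:
            if lo <= low and high <= hi:   # cached interval covers query
                return [k for k in keys if low <= k <= high]
        return None
```

Containment is what makes interval caching different from item caching: a query can hit even if it was never issued before, as long as a wider query was.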
11. Fast convergence caching replacement algorithm based on dynamic classification for content-centric networks (cited by 1)
Authors: FANG Chao, HUANG Tao, LIU Jiang, CHEN Jian-ya, LIU Yun-jie. The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2013, Issue 5, pp. 45-50.
One of the key research areas in content-centric networking (CCN) is developing more efficient cache replacement policies to improve the hit ratio of CCN in-network caching. However, most existing cache strategies, designed mainly around the time or frequency of content access, cannot properly handle the dynamics of content popularity in the network. In this paper, we propose a fast convergence caching replacement algorithm based on dynamic classification for CCN, named FCDC. It develops a dynamic classification method that reduces the time complexity of cache inquiry and achieves a higher cache hit rate than random classification under dynamically changing content popularity. Meanwhile, to mitigate the influence of dynamic content popularity, it designs a weighting function that speeds up the convergence of the cache hit rate in the CCN router. Experimental results show that the proposed scheme outperforms replacement policies based on least recently used (LRU) and recent usage frequency (RUF) in cache hit rate and resiliency when content popularity in the network varies.
Keywords: CCN; cache replacement policy; dynamic classification; fast convergence; category popularity
12. Design of in-network caching scheme in CCN based on grey relational analysis
Authors: CUI Xian-dong, HUANG Tao, LIU Jiang, LI Li, CHEN Jian-ya, LIU Yun-jie. The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2014, Issue 2, pp. 1-8.
In-network caching is one of the most important issues in content-centric networking (CCN) and can strongly influence the performance of the caching system. Although much work has been done on in-network caching scheme design in CCN, most of it does not jointly address multiple network attribute parameters during caching algorithm design. To fill this gap, a new in-network caching scheme based on grey relational analysis (GRA) is proposed. The authors first define two new metrics, the request influence degree (RID) and the cache replacement rate. The RID indicates the importance of a node along the content delivery path from the perspective of arriving interest packets; the cache replacement rate denotes the caching load of a node. Combining these with the number of hops a request travels from the user and the node traffic, four network attribute parameters are considered in the caching algorithm design. Based on these four parameters, a GRA-based in-network caching algorithm is proposed, which significantly improves the performance of CCN. Finally, extensive simulations based on ndnSIM demonstrate that the GRA-based caching scheme achieves a lower load on the source server and fewer average hops than the existing betweenness (Betw) and ALWAYS schemes.
Keywords: CCN; in-network caching; request influence degree; cache replacement rate; grey relational analysis
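Grey relational analysis itself is a standard multi-attribute scoring procedure, so the core computation can be sketched independently of the paper's specific four parameters. This is the textbook form (reference sequence, grey relational coefficients, mean grade); the paper's normalization and weighting may differ, and the attribute values are assumed pre-normalized and not all identical:

```python
def grey_relational_grades(candidates, ideal, rho=0.5):
    """Score each candidate attribute vector by its grey relational
    grade against an ideal reference vector. rho is the conventional
    distinguishing coefficient; higher grade = closer to the ideal.

    candidates -- list of equal-length attribute vectors
    ideal      -- reference vector (e.g., best value per attribute)
    """
    # Absolute differences from the reference sequence.
    deltas = [[abs(c[k] - ideal[k]) for k in range(len(ideal))]
              for c in candidates]
    flat = [d for row in deltas for d in row]
    dmin, dmax = min(flat), max(flat)
    grades = []
    for row in deltas:
        # Grey relational coefficient per attribute, then the mean.
        coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

In a caching context, each candidate would be a node's (RID, replacement rate, hops, traffic) vector, and the node with the highest grade would be chosen to cache the content.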