Journal Articles
4,941 articles found
1. Performance Behaviour Analysis of the Present 3-Level Cache System for Multi-Core Processors
Authors: Muhammad Ali Ismail. Computer Technology and Application, 2012(11): 729-733 (5 pages).
In this paper, a study of the expected performance behaviour of the present 3-level cache system for multi-core processors is presented. To this end, a queuing model of the 3-level cache system is developed, and its performance is analyzed as the number of cores increases. Important performance parameters, such as the access time and utilization of each individual cache level and the overall average access time of the cache system, are determined. Results for up to 1024 cores are reported.
Keywords: multi-core; memory hierarchy; cache access time; queuing analysis.
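The abstract above derives per-level access times and utilizations from a queuing model. As a rough sketch of the underlying arithmetic (not the paper's model), the recursive average-access-time formula and the per-level request rates that drive utilization can be computed as follows; the hit rates and latencies here are invented for illustration:

```python
def amat(levels, mem_time):
    """Average memory access time of a multi-level cache hierarchy.

    levels: list of (hit_rate, access_time) pairs ordered L1 -> L3.
    mem_time: main-memory access time paid on a last-level miss.
    """
    t = mem_time
    for hit_rate, access_time in reversed(levels):
        t = access_time + (1.0 - hit_rate) * t
    return t

def level_arrival_rates(request_rate, levels):
    """Request rate seen by each level: misses of the level above flow
    down, which is what drives per-level utilization in a queuing
    treatment (e.g. rho = lambda / mu for an M/M/1 server)."""
    rates = []
    for hit_rate, _ in levels:
        rates.append(request_rate)
        request_rate *= (1.0 - hit_rate)
    return rates

# Hypothetical (hit rate, access cycles) per level; not the paper's numbers.
hierarchy = [(0.90, 2), (0.80, 10), (0.70, 30)]
print(amat(hierarchy, 200))            # overall average access time
print(level_arrival_rates(1.0, hierarchy))
```

With these invented numbers, the L2 and L3 levels see only the miss traffic of the level above, which is why their utilization stays low even as core counts scale.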
2. Multi-Level Web Cache Model Used in Data Grid Application
Authors: CHEN Lei, LI Sanli. Wuhan University Journal of Natural Sciences (CAS), 2006(5): 1216-1221 (6 pages).
This paper proposes a novel multilevel data cache model built on Web caches (MDWC), based on network cost in a data grid. By constructing a communication tree of grid sites according to network cost and using a single leader for each data segment within each region, the MDWC makes the most of the Web caches of other sites whose bandwidth to the job-executing site is sufficiently broad. Experimental results indicate that the MDWC reduces data response time and data update cost by avoiding network congestion when its parameters are tuned to the application environment.
Keywords: Web cache; data grid; coherence; replica.
3. Efficient cache replacement framework based on access hotness for spacecraft processors
Authors: GAO Xin, NIAN Jiawei, LIU Hongjin, YANG Mengfei. 《中国空间科学技术(中英文)》 (Chinese Space Science and Technology; CSCD, PKU Core), 2024(2): 74-88 (15 pages).
A notable portion of cachelines in real-world workloads exhibits inner non-uniform access behaviors. However, modern cache management rarely considers this fine-grained feature, which impacts the effective cache capacity of contemporary high-performance spacecraft processors. To harness these non-uniform access behaviors, an efficient cache replacement framework featuring an auxiliary cache specifically designed to retain evicted hot data is proposed. This framework reconstructs the cache replacement policy, facilitating data migration between the main cache and the auxiliary cache. Unlike traditional cacheline-granularity policies, the approach excels at identifying and evicting infrequently used data, thereby optimizing cache utilization. The evaluation shows impressive performance improvement, especially on workloads with irregular access patterns. Benefiting from its fine granularity, the proposal achieves superior storage efficiency compared with commonly used cache management schemes, providing a potential optimization opportunity for modern resource-constrained processors, such as spacecraft processors. Furthermore, the framework complements existing cache replacement policies and can be seamlessly integrated with minimal modifications, enhancing their overall efficacy.
Keywords: spacecraft processors; cache management; replacement policy; storage efficiency; memory hierarchy; microarchitecture.
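A minimal sketch of the general idea of an auxiliary cache that retains evicted hot data, assuming an LRU main cache and a simple "hot means accessed more than once" rule. This illustrates the concept only; the paper's framework, granularity, and migration policy are not reproduced here:

```python
from collections import OrderedDict

class VictimAssistedCache:
    """Main LRU cache plus a small auxiliary cache that retains evicted
    hot entries, loosely in the spirit of the framework described above.
    Sizes and the hotness rule are illustrative assumptions."""

    def __init__(self, main_size, aux_size):
        self.main = OrderedDict()   # key -> access count, LRU order
        self.aux = OrderedDict()
        self.main_size, self.aux_size = main_size, aux_size

    def access(self, key):
        if key in self.main:                     # main-cache hit
            self.main[key] += 1
            self.main.move_to_end(key)
            return "main"
        if key in self.aux:                      # auxiliary hit: migrate back
            count = self.aux.pop(key)
            self._insert_main(key, count + 1)
            return "aux"
        self._insert_main(key, 1)                # miss: fetch and insert
        return "miss"

    def _insert_main(self, key, count):
        if len(self.main) >= self.main_size:
            old, old_count = self.main.popitem(last=False)  # evict LRU
            if old_count > 1:                    # only hot data enters aux
                if len(self.aux) >= self.aux_size:
                    self.aux.popitem(last=False)
                self.aux[old] = old_count
        self.main[key] = count
```

A hot line evicted by a burst of cold insertions can thus be recalled from the auxiliary cache instead of paying a full miss.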
4. Power Information System Database Cache Model Based on Deep Machine Learning
Authors: Manjiang Xing. Intelligent Automation & Soft Computing (SCIE), 2023(7): 1081-1090 (10 pages).
At present, the database cache model of power information systems suffers from slow running speed and a low database hit rate. To this end, this paper proposes a database cache model for power information systems based on deep machine learning. The caching model includes program caching, Structured Query Language (SQL) preprocessing, and core caching modules. Statement efficiency is improved by adjusting operations such as multi-table joins and keyword replacement in the SQL optimizer. The core caching module builds predictive models using boosted regression trees: a series of regression tree models is generated with machine learning algorithms, the resource occupancy rate in the power information system is analyzed to dynamically adjust the voting selection of the regression trees, and the voting threshold of the prediction model is dynamically adjusted as well, after which the cache model is re-initialized. The experimental results show that the model has a good cache hit rate and cache efficiency and can improve the data caching performance of the power information system. It achieves a high hit rate and short delay time, maintains a good hit rate under different memory sizes, and occupies little space and CPU during actual operation, which helps the power information system run efficiently and quickly.
Keywords: deep machine learning; power information system; database; cache model.
5. Request pattern change-based cache pollution attack detection and defense in edge computing
Authors: Junwei Wang, Xianglin Wei, Jianhua Fan, Qiang Duan, Jianwei Liu, Yangang Wang. Digital Communications and Networks (SCIE, CSCD), 2023(5): 1212-1220 (9 pages).
By caching popular contents at the network edge, wireless edge caching can greatly reduce both the content request latency at mobile devices and the traffic burden at the core network. However, popularity-based caching strategies are vulnerable to Cache Pollution Attacks (CPAs) due to the weak security protection at both edge nodes and mobile devices. In CPAs, by initiating a large number of requests for unpopular contents, malicious users can pollute the edge caching space and degrade the caching efficiency. This paper first integrates the dynamic nature of content requests and mobile devices into the edge caching framework and introduces an eavesdropping-based CPA strategy. Then, an edge caching mechanism, which contains a Request Pattern Change-based Cache Pollution Detection (RPC2PD) algorithm and an Attack-aware Cache Defense (ACD) algorithm, is proposed to defend against CPAs. Simulation results show that the proposed mechanism effectively suppresses the effects of CPAs on caching performance and improves the cache hit ratio.
Keywords: mobile edge computing; cache pollution attack; flash crowd.
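A toy sketch of detecting a request-pattern change: compare the current request window against baseline popularity and flag a sharp divergence. KL divergence and the threshold are stand-in assumptions, not the paper's RPC2PD algorithm:

```python
import math
from collections import Counter

def normalized(counts):
    """Turn a Counter into a probability distribution."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def kl_divergence(p, q, eps=1e-9):
    # D(p || q) over the union of supports, with eps smoothing for
    # contents absent from one side.
    keys = set(p) | set(q)
    return sum(p.get(k, eps) * math.log(p.get(k, eps) / q.get(k, eps))
               for k in keys)

def pollution_suspected(baseline_reqs, window_reqs, threshold=0.5):
    """Flag a possible cache pollution attack when the request pattern
    in the current window diverges sharply from baseline popularity.
    The threshold is an illustrative assumption."""
    p = normalized(Counter(window_reqs))
    q = normalized(Counter(baseline_reqs))
    return kl_divergence(p, q) > threshold
```

A flood of requests for previously unpopular contents puts most of the window's probability mass where the baseline has almost none, driving the divergence up.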
6. Cache Coherency Design in Pentium Ⅲ SMP System (cited by 1)
Authors: LIU Jinsong, ZHANG Jiangling, GU Xiwu. Wuhan University Journal of Natural Sciences (CAS), 2006(2): 360-364 (5 pages).
This paper analyzes the cache coherency mechanism from a system point of view. It first discusses the cache-memory hierarchy of the Pentium Ⅲ SMP system, including memory area distribution, cache attribute control, and bus transactions. It then analyzes the hardware snooping mechanism of the P6 bus and the MESI state transitions adopted by the Pentium Ⅲ. On this basis, it focuses on how the multiprocessors and the P6 bus cooperate to ensure cache coherency of the whole system, and gives the key points of cache coherency design.
Keywords: snoop; cache coherency; MESI protocol; P6 bus; Pentium; SMP system.
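The MESI transitions mentioned above can be sketched as a small state table for a single cache line. This is a textbook simplification (local reads and writes plus snooped bus transactions), not the full P6 bus behavior:

```python
# MESI states: Modified, Exclusive, Shared, Invalid.
# Transition table: (state, event) -> next state.
MESI = {
    ("I", "read_miss_shared"):    "S",  # another cache holds the line
    ("I", "read_miss_exclusive"): "E",  # no other cache holds it
    ("I", "write_miss"):          "M",  # read-for-ownership, then write
    ("E", "local_read"):          "E",
    ("E", "local_write"):         "M",  # silent upgrade, no bus transaction
    ("E", "snoop_read"):          "S",
    ("E", "snoop_write"):         "I",
    ("S", "local_read"):          "S",
    ("S", "local_write"):         "M",  # invalidate broadcast on the bus
    ("S", "snoop_read"):          "S",
    ("S", "snoop_write"):         "I",
    ("M", "local_read"):          "M",
    ("M", "local_write"):         "M",
    ("M", "snoop_read"):          "S",  # write back, then share
    ("M", "snoop_write"):         "I",  # write back, then invalidate
}

def run(state, events):
    """Apply a sequence of events to one cache line and return its state."""
    for e in events:
        state = MESI[(state, e)]
    return state
```

For instance, a line loaded exclusively, written locally, and then snooped by another core's read ends up Shared after the dirty data is written back.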
7. Joint Optimization of Satisfaction Index and Spectrum Efficiency with Cache Restricted for Resource Allocation in Multi-Beam Satellite Systems (cited by 4)
Authors: Pei Zhang, Xiaohui Wang, Zhiguo Ma, Junde Song. China Communications (SCIE, CSCD), 2019(2): 189-201 (13 pages).
Dynamic resource allocation (DRA) is a key technology for improving system performance in GEO multi-beam satellite systems. Since cache resources on the satellite are valuable and limited, the DRA problem under restricted cache resources is an important issue to study. This paper investigates the DRA problem of carrier resources under cache constraints. With the aim of satisfying all users' traffic demands as far as possible and maximizing bandwidth utilization, we formulate a multi-objective optimization problem (MOP) in which the satisfaction index and the spectrum efficiency are jointly optimized. A modified strategy, SA-NSGAII, which combines simulated annealing (SA) and the non-dominated sorting genetic algorithm II (NSGA-II), is proposed to approximate the Pareto solution of this MOP. Simulation results show the effectiveness of the proposed algorithm in terms of satisfaction index, spectrum efficiency, occupied cache, and other metrics.
Keywords: GEO multi-beam satellite system; dynamic resource allocation; SA-NSGAII; cache; satisfaction index; spectrum efficiency.
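The non-dominated sorting at the heart of NSGA-II rests on Pareto dominance. A minimal sketch for two maximized objectives (e.g., satisfaction index and spectrum efficiency); the sample points are invented:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objectives maximized here)."""
    return (all(x >= y for x, y in zip(a, b)) and
            any(x > y for x, y in zip(a, b)))

def pareto_front(points):
    """The non-dominated set: the first front of NSGA-II's sorting step."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Invented (satisfaction, spectrum efficiency) candidates.
candidates = [(1, 5), (2, 4), (3, 3), (2, 2), (1, 1)]
print(pareto_front(candidates))
```

SA-NSGAII, as described above, would perturb candidates with simulated annealing while such fronts guide selection; this sketch shows only the dominance test.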
8. Cache in fog computing design, concepts, contributions, and security issues in machine learning prospective
Authors: Muhammad Ali Naeem, Yousaf Bin Zikria, Rashid Ali, Usman Tariq, Yahui Meng, Ali Kashif Bashir. Digital Communications and Networks (SCIE, CSCD), 2023(5): 1033-1052 (20 pages).
The massive growth of diversified smart devices and continuous data generation poses a challenge to communication architectures. To deal with this problem, communication networks consider fog computing one of the promising technologies that can improve overall communication performance. It brings on-demand services proximate to the end devices and delivers the requested data in a short time. Fog computing faces several issues, such as latency, bandwidth, and link utilization, due to limited resources and the high processing demands of end devices. To this end, fog caching plays an imperative role in addressing data dissemination issues. This study provides a comprehensive discussion of fog computing, the Internet of Things (IoT), and the critical issues related to data security and dissemination in fog computing. Moreover, we examine fog-based caching schemes and their contributions toward the existing issues of fog computing. This paper presents a number of caching schemes with their contributions, benefits, and challenges in overcoming the problems and limitations of fog computing. We also identify machine learning-based approaches for cache security and management in fog computing, as well as several prospective future research directions in caching, fog computing, and machine learning.
Keywords: Internet of Things; cloud computing; fog computing; caching; latency.
9. Shared Cache Based on Content Addressable Memory in a Multi-Core Architecture
Authors: Allam Abumwais, Mahmoud Obaid. Computers, Materials & Continua (SCIE, EI), 2023(3): 4951-4963 (13 pages).
Modern shared-memory multi-core processors typically have shared Level 2 (L2) or Level 3 (L3) caches. Cache bottlenecks and replacement strategies are the main problems of such architectures, where multiple cores try to access the shared cache simultaneously. This paper documents the implementation of a Dual-Port Content Addressable Memory (DPCAM) and a modified Near-Far Access Replacement Algorithm (NFRA), previously proposed as a shared L2 cache layer in a multi-core processor. Standard Performance Evaluation Corporation (SPEC) CPU2006 benchmark workloads are used to evaluate the benefit of the shared L2 cache layer. Results show improved performance of the DPCAM and NFRA in the multi-core processor, corresponding to a higher number of concurrent accesses to shared memory. The new architecture significantly increases system throughput and records performance improvements of up to 8.7% on various SPEC CPU2006 benchmarks. The miss rate is also improved by about 13%, with some exceptions in the sphinx3 and bzip2 benchmarks. These results could open a new window for solving the long-standing problems of shared caches in multi-core processors.
Keywords: multi-core processor; shared cache; content addressable memory; dual-port CAM; replacement algorithm; benchmark program.
10. A Time Pattern-Based Intelligent Cache Optimization Policy on Korea Advanced Research Network
Authors: Waleed Akbar, Afaq Muhammad, Wang-Cheol Song. Intelligent Automation & Soft Computing (SCIE), 2023(6): 3743-3759 (17 pages).
Data is growing quickly due to a significant increase in social media applications. Today, billions of people use an enormous amount of data to access the Internet, and the backbone network experiences a substantial load as a result. Users in the same region or company frequently ask for similar material, especially on social media platforms, so a subsequent request for the same content can be satisfied from the edge if it is stored in proximity to the user. Applications that require relatively low latency can use Content Delivery Network (CDN) technology to meet their requirements. An edge and a data center constitute the CDN architecture. To fulfill requests from the edge and minimize the impact on the network, the requested content can be buffered closer to the user device; which content should be kept at the edge is the primary concern. Cache policies have been optimized using various conventional and unconventional methods, but they have yet to consider the timestamp attached to a video request. We obtained a 24-hour content request pattern from publicly available datasets; as a time-based video profile shows, the popularity of a video is influenced by the time of day. We present a cache optimization method based on this time-based request pattern. The problem is described as a cache-hit-ratio maximization problem emphasizing a relevance score and machine learning model accuracy. A model predicts the video to be cached in the next time slot, and the relevance score identifies the video to be removed from the cache. We gather the logs and generate content requests using an extracted video request pattern; these logs are pre-processed to create a dataset divided into three time slots per day. A long short-term memory (LSTM) model is trained on this dataset to forecast the video at the next time interval. The proposed optimized caching policy is evaluated on our CDN architecture deployed on the Korea Advanced Research Network (KOREN) infrastructure. Our findings demonstrate how adding time-based request patterns impacts the system by increasing the cache hit rate. To show the effectiveness of the proposed model, we compare the results with state-of-the-art techniques.
Keywords: multimedia content delivery; request pattern recognition; real-time machine learning; deep learning optimization; caching; edge computing.
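A simplified sketch of the time-slot idea above: bucket a day's request log into three windows and cache the most popular videos per window. This popularity count stands in for the paper's LSTM prediction and relevance score, which are not reproduced here:

```python
from collections import Counter, defaultdict

def build_slot_profiles(logs, slots=3):
    """logs: iterable of (hour, video_id) pairs, hour in 0..23.
    Buckets the day into `slots` equal windows and counts
    per-window popularity."""
    profiles = defaultdict(Counter)
    for hour, video in logs:
        profiles[hour * slots // 24][video] += 1
    return profiles

def cache_for_slot(profiles, slot, capacity):
    """Videos to cache for a time slot: the most popular in that window
    (a stand-in for a learned per-slot prediction)."""
    return [v for v, _ in profiles[slot].most_common(capacity)]

# Invented log: news dominates the morning, movies the evening.
logs = [(h, "news") for h in range(6)] + [(20, "movie"), (21, "movie"), (22, "drama")]
profiles = build_slot_profiles(logs)
```

Swapping the cached set at each slot boundary is what lets a time-aware policy beat a single static popularity ranking.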
11. MemSC: A Scan-Resistant and Compact Cache Replacement Framework for Memory-Based Key-Value Cache Systems (cited by 2)
Authors: Mei Li, Hong-Jun Zhang, Yan-Jun Wu, Chen Zhao. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2017(1): 55-67 (13 pages).
Memory-based key-value cache systems, such as Memcached and Redis, have become indispensable components of data center infrastructures and are used to cache performance-critical data to avoid expensive back-end database accesses. As the memory is usually not large enough to hold all the items, cache replacement must be performed to evict some cached items to make room for newly arriving items when there is no free space. Many real-world workloads target small items and have frequent bursts of scans (a scan is a sequence of one-time access requests). The commonly used LRU policy does not work well under such workloads, since LRU needs a large amount of metadata and tends to discard hot items during scans, and small decreases in hit ratio can result in large end-to-end losses in these systems. This paper presents MemSC, a scan-resistant and compact cache replacement framework for Memcached. MemSC assigns a multi-granularity reference flag to each item, which requires only a few bits (two bits are enough for general use) per item to support scan-resistant cache replacement policies. To evaluate MemSC, we implement three representative cache replacement policies (MemSC-HM, MemSC-LH, and MemSC-LF) on MemSC and test them using various workloads. The experimental results show that MemSC outperforms prior techniques. Compared with the optimized LRU policy in Memcached, MemSC-LH reduces the cache miss ratio and the memory usage of the resulting system by up to 23% and 14%, respectively.
Keywords: key-value cache system; cache replacement; scan resistance; space efficiency.
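A hedged sketch of how a small saturating reference counter yields scan resistance, in the spirit of MemSC's two-bit flag. The actual MemSC policies (HM/LH/LF) differ; this CLOCK-style variant is only illustrative:

```python
class TwoBitClock:
    """CLOCK-style replacement where each item keeps a 2-bit saturating
    reference counter. New (possibly scan) items start at 1 and are aged
    toward 0, so a burst of one-time accesses cannot displace items that
    have proven reuse."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}          # key -> 2-bit counter (0..3)
        self.ring = []           # circular order for the clock hand
        self.hand = 0

    def access(self, key):
        if key in self.items:
            self.items[key] = min(self.items[key] + 1, 3)  # saturate at 3
            return True
        if len(self.items) >= self.capacity:
            self._evict()
        self.items[key] = 1       # a one-time scan item never exceeds 1
        self.ring.append(key)
        return False

    def _evict(self):
        while True:
            self.hand %= len(self.ring)
            key = self.ring[self.hand]
            if self.items[key] == 0:
                del self.items[key]
                self.ring.pop(self.hand)
                return
            self.items[key] -= 1  # age the item, move the hand on
            self.hand += 1
```

Two bits per item is all the metadata this needs, which is the compactness argument the abstract makes against full LRU list pointers.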
12. Metal Enclosure Design and an Extra-Large SLC Cache: Testing the 4TB Crucial X8 Portable SSD
Author: Ma Yuchuan (text/photos). 《微型计算机》 (MicroComputer), 2023(8): 60-63 (4 pages).
Thanks to increasingly advanced NAND flash manufacturing processes and ever-lower production costs, vendors have recently launched many large-capacity SSDs. Micron's Crucial brand, for example, has released the 4TB Crucial X8 portable SSD. Compared with other products, this SSD not only offers very large storage capacity but also respectable performance: over its USB 3.2 Gen 2 interface, its rated sequential transfer speed reaches 1050 MB/s. In addition, it adopts a solid metal enclosure, which looks classier than typical plastic-cased portable SSDs. So what is this portable SSD like in actual use?
Keywords: NAND flash; storage capacity; cache; plastic enclosure; SSD; USB; extra-large capacity; transfer speed.
13. GFCache: A Greedy Failure Cache Considering Failure Recency and Failure Frequency for an Erasure-Coded Storage System
Authors: Mingzhu Deng, Fang Liu, Ming Zhao, Zhiguang Chen, Nong Xiao. Computers, Materials & Continua (SCIE, EI), 2019(1): 153-167 (15 pages).
In the big data era, data unavailability, either temporary or permanent, has become a normal daily occurrence. Unlike permanent data failures, which are fixed by a background job, temporarily unavailable data is recovered on the fly to serve the ongoing read request. However, the newly revived data is discarded after serving the request, on the assumption that data experiencing temporary failures may come back alive later. Such disposal of failure data prevents the sharing of failure information among clients and leads to many unnecessary data recovery processes (e.g., caused by either recurring unavailability of a data item or multiple data failures in one stripe), thereby straining system performance. To this end, this paper proposes GFCache, which caches corrupted data for the dual purposes of sharing failure information and eliminating unnecessary data recovery processes. GFCache employs a greedy, opportunistic caching approach that promotes not only the failed data but also sequential failure-likely data in the same stripe. Additionally, GFCache includes FARC (Failure ARC), a cache replacement algorithm featuring a balanced consideration of failure recency and frequency to accommodate data corruption with a good hit ratio. The data stored in GFCache supports fast reads on the normal data access path. Furthermore, since GFCache is a generic failure cache, it can be used wherever erasure coding is deployed, with any specific coding schemes and parameters. Evaluations show that GFCache achieves a good hit ratio with our caching algorithm and significantly boosts system performance by reducing unnecessary recoveries of vulnerable data in the cache.
Keywords: failure cache; greedy; recovery; erasure coding; failure recency; failure frequency.
14. A Cache-Hit Side-Channel Attack on the AES Algorithm (cited by 8)
Authors: DENG Gaoming, ZHAO Qiang, ZHANG Peng, CHEN Kaiyan. 《计算机工程》 (Computer Engineering; CAS, CSCD, PKU Core), 2008(13): 113-114, 129 (3 pages).
Fast implementations of AES use table lookups, and the lookup indices affect both the cache hit rate and the encryption time, while the indices are closely related to the key. By analyzing the relationship between the lookup indices in the last AES round, the ciphertext, and the last round key, as well as their influence on cache hits and encryption time, this paper proposes a side-channel attack on AES that uses cache-hit information as the side channel. On Intel Celeron 1.99 GHz and Pentium 4 3.6 GHz CPUs, with 2^21 and 2^25 random plaintext samples respectively, the 128-bit AES key of the OpenSSL v0.9.8(a) library was recovered within 5 minutes. Countermeasures against this attack are also introduced.
Keywords: side-channel attack; cache hit; AES algorithm.
15. An RSA Timing Attack Based on Cache Misses (cited by 4)
Authors: CHEN Caisen, WANG Tao, CHEN Jiansi, CHEN Qi. 《微电子学与计算机》 (Microelectronics & Computer; CSCD, PKU Core), 2009(5): 180-182, 186 (4 pages).
Since simultaneous multithreading allows multiple execution threads to share the processor's execution units, the shared cache provides a simple, high-bandwidth covert channel between threads, enabling a malicious thread to monitor the resources accessed by other threads. Taking the RSA implementation of OpenSSL 0.9.7c as the attack target, a spy thread is executed to monitor the cipher thread and observe the timing characteristics of cache data accesses during RSA decryption; analyzing this timing information reveals the RSA decryption key. Finally, suggestions on how to mitigate or even eliminate this attack are given.
Keywords: RSA; simultaneous multithreading; cache; sliding window.
16. Design of a Pipelined CPU with Cache on a Remote Hardware Experiment System (cited by 1)
Authors: CHEN Yongqiang, QUAN Chengbin, LI Shanshan. 《实验技术与管理》 (Experimental Technology and Management; CAS, PKU Core), 2012(10): 86-88, 100 (4 pages).
This paper describes a remote CPU design project carried out on the remote hardware experiment system developed by the Department of Computer Science and Technology at Tsinghua University. A four-stage pipelined CPU was designed that supports arithmetic operations, logic operations, conditional branches, memory access, and other functions. Through designing and debugging the CPU, students deepen their understanding of hardware programming, CPU pipeline structure, data hazard handling, and the operation of the memory system.
Keywords: remote hardware experiment system; multi-stage pipeline; cache; CPU.
17. An Algorithm Based on Markov Chain to Improve Edge Cache Hit Ratio for Blockchain-Enabled IoT (cited by 12)
Authors: Hongman Wang, Yingxue Li, Xiaoqi Zhao, Fangchun Yang. China Communications (SCIE, CSCD), 2020(9): 66-76 (11 pages).
Reasonable allocation of storage and computing resources is the basis of building a big data system. With the development of the IoT (Internet of Things), more data will be generated. A three-layer architecture comprises a smart devices layer, an edge cloud layer, and a blockchain-based distributed cloud layer. Blockchain is used in the IoT to build a distributed, decentralized P2P architecture that addresses security issues, while edge computing deals with the increasing volume of data. Edge caching is one of the important application scenarios. In order to allocate edge cache resources reasonably, improve the quality of service, and reduce the waste of bandwidth resources, this paper proposes a content selection algorithm for edge cache nodes. The algorithm adopts a Markov chain model, improves the utilization of cache space, and reduces content transmission delay. A hierarchical caching strategy is adopted in which the secondary cache stores slices of contents to expand the coverage of cached content and reduce user waiting time. Regional node cooperation is adopted to expand the cache space and support regional preferences in cache content. Compared with classical replacement algorithms, simulation results show that the proposed algorithm achieves a higher cache hit ratio and higher space utilization.
Keywords: cache resource allocation; blockchain-enabled IoT; edge computing; Markov chain; hierarchical caching technique.
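A minimal sketch of a first-order Markov predictor over a content request stream, the kind of model the abstract describes for choosing what to cache next; the hierarchical and cooperative-region parts are omitted:

```python
from collections import Counter, defaultdict

def build_transitions(requests):
    """First-order Markov model of a request stream: counts of which
    content tends to follow which."""
    trans = defaultdict(Counter)
    for prev, nxt in zip(requests, requests[1:]):
        trans[prev][nxt] += 1
    return trans

def predict_next(trans, current, k=1):
    """Top-k most likely next contents given the current request:
    candidates to prefetch into the edge cache."""
    return [c for c, _ in trans[current].most_common(k)]

# Invented request stream over contents a, b, c.
trans = build_transitions(list("ababac"))
```

An edge node would periodically rebuild (or incrementally update) the transition counts from its local request log and prefetch the predicted contents.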
18. Digests and the Web Cache Digest Protocol (cited by 1)
Authors: TAN Jin, YU Shengsheng, ZHOU Jingli. 《计算机应用研究》 (Application Research of Computers; CSCD, PKU Core), 2002(12): 139-140, 146 (3 pages).
Web cache systems play an extremely important role in reducing server load, cutting Internet traffic and connection costs, speeding up user access, and achieving load balancing. After introducing the working principles and protocols of Web caching, this paper analyzes and describes the Web Cache Digest protocol in detail in terms of its functions, working mechanism, and algorithms.
Keywords: Digest; Web Cache Digest protocol; communication protocol; ICP protocol; hash function; Internet.
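Cache Digests are compact summaries of a cache's contents that a sibling can fetch once and then query locally, instead of issuing a per-object ICP query for every request. A Bloom filter is the standard construction; the parameters below are illustrative, not Squid's defaults:

```python
import hashlib

class CacheDigest:
    """Minimal Bloom-filter digest of a cache's contents. Membership
    tests can return false positives (a wasted sibling fetch) but never
    false negatives."""

    def __init__(self, bits=1024, hashes=4):
        self.bits, self.hashes = bits, hashes
        self.bitmap = bytearray(bits // 8)

    def _positions(self, url):
        # Derive `hashes` bit positions from salted MD5 of the URL.
        for i in range(self.hashes):
            h = hashlib.md5(f"{i}:{url}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.bits

    def add(self, url):
        for pos in self._positions(url):
            self.bitmap[pos // 8] |= 1 << (pos % 8)

    def may_contain(self, url):
        # True may be a false positive; False is always definitive.
        return all(self.bitmap[p // 8] & (1 << (p % 8))
                   for p in self._positions(url))
```

A sibling that sees `may_contain(url)` return False can skip that peer entirely, which is the bandwidth saving over per-request ICP queries.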
19. Array Resizing: An Effective Method for Reducing the Cache Miss Rate (cited by 1)
Authors: CHEN Jie, LU Xinda. 《上海交通大学学报》 (Journal of Shanghai Jiaotong University; EI, CAS, CSCD, PKU Core), 1997(8): 44-48 (5 pages).
Loop tiling is an effective way to improve the cache hit rate, but cache interference misses remain even after tiling. The side effects of loop tiling are also an important factor affecting program efficiency.
Keywords: miss rate; compiler; cache; array resizing method.
20. Research on an Adaptive Prefetching Algorithm for Disk Array Caches (cited by 2)
Authors: WANG Zuoxin, ZHENG Leli. 《华南理工大学学报(自然科学版)》 (Journal of South China University of Technology, Natural Science Edition; EI, CAS, CSCD, PKU Core), 1997(5): 4-9 (6 pages).
This paper presents a caching algorithm for disk arrays that uses an adaptive prefetching strategy: based on information from past disk accesses, it predicts the address of the next disk access fairly accurately and reads the data into the cache in advance, thereby reducing the average service time of disk accesses. An adaptation algorithm for multi-task environments is also discussed. Simulation results show that this algorithm outperforms the LRU algorithm.
Keywords: disk array; cache memory; prefetching; adaptive; adaptation table.
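A hedged sketch of an adaptive readahead policy: sequential streaks grow the prefetch window, non-sequential accesses shrink it. The paper's prediction scheme and adaptation table are not reproduced; this shows only the general mechanism:

```python
class AdaptiveReadahead:
    """Adjusts the prefetch window from the observed block access
    pattern: each sequential hit doubles the window (up to a cap),
    each random access halves it. Parameters are illustrative."""

    def __init__(self, max_window=64):
        self.max_window = max_window
        self.window = 1
        self.last = None

    def on_access(self, block):
        if self.last is not None and block == self.last + 1:
            # Sequential streak: prefetch more aggressively.
            self.window = min(self.window * 2, self.max_window)
        else:
            # Random access: back off toward a single-block window.
            self.window = max(self.window // 2, 1)
        self.last = block
        # Blocks to prefetch into the cache after serving `block`.
        return list(range(block + 1, block + 1 + self.window))
```

Backing off on random accesses keeps the cache from being flooded with useless prefetched blocks, which is the trade-off an adaptation table tunes.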