Journal Articles
32 articles found
1. Correlation-Aware Replica Prefetching Strategy to Decrease Access Latency in Edge Cloud
Authors: Yang Liang, Zhigang Hu, Xinyu Zhang, Hui Xiao. China Communications (SCIE, CSCD), 2021, No. 9, pp. 249-264.
With the number of connected devices increasing rapidly, access latency is becoming a serious issue in the edge cloud environment. Massive numbers of time-constrained, data-intensive mobile applications require efficient replication strategies to decrease retrieval time. However, many previous works determine replicas unreasonably, which incurs high response delay. To this end, a correlation-aware replica prefetching (CRP) strategy based on the file correlation principle is proposed, which can prefetch files with high access probability. The key is to determine and obtain the implicit high-value files effectively, which has a significant impact on the performance of CRP. To accelerate the acquisition of implicit high-value files, an access rule management method based on consistent hashing is proposed, and storage and query mechanisms for access rules based on an adjacency-list storage structure are further presented. Theoretical analysis and simulation results corroborate that, compared with other schemes, CRP shortens average response time by over 4.8%, improves average hit ratio by over 4.2%, reduces the amount of transmitted data by over 8.3%, and maintains replication frequency at a reasonable level.
Keywords: edge cloud, access latency, replica prefetching, correlation-aware, access rule
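The access-rule machinery described above lends itself to a compact illustration. Below is a minimal Python sketch, not the authors' implementation: it assumes a set of named storage nodes, uses consistent hashing to decide which node owns the rules for a file, and keeps each node's rules in an adjacency list mapping a file to the files observed to follow it. The class and method names (`AccessRuleStore`, `record_rule`, `prefetch_candidates`) and the counting threshold are illustrative assumptions.

```python
import bisect
import hashlib
from collections import defaultdict

class AccessRuleStore:
    """Consistent hashing maps each file's rules to an owning node; each node
    stores rules as an adjacency list: file -> {follower: access count}."""

    def __init__(self, nodes, virtual=100):
        self.ring = []                       # sorted (hash, node) pairs
        for node in nodes:
            for v in range(virtual):         # virtual nodes smooth the ring
                bisect.insort(self.ring, (self._hash(f"{node}#{v}"), node))
        self.rules = defaultdict(lambda: defaultdict(dict))  # node -> file -> adj

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def _node_for(self, file_id):
        i = bisect.bisect(self.ring, (self._hash(file_id), "")) % len(self.ring)
        return self.ring[i][1]               # clockwise successor on the ring

    def record_rule(self, file_id, follower):
        """Record that `follower` was accessed soon after `file_id`."""
        adj = self.rules[self._node_for(file_id)][file_id]
        adj[follower] = adj.get(follower, 0) + 1

    def prefetch_candidates(self, file_id, threshold=2):
        """Files whose observed correlation with `file_id` passes a threshold."""
        adj = self.rules[self._node_for(file_id)].get(file_id, {})
        return [f for f, c in adj.items() if c >= threshold]

store = AccessRuleStore(["edge-1", "edge-2", "edge-3"])
store.record_rule("video.idx", "video.part1")
store.record_rule("video.idx", "video.part1")
print(store.prefetch_candidates("video.idx"))   # ['video.part1']
```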
2. Occlusion Culling Algorithm Using Prefetching and Adaptive Level of Detail Techniques
Authors: 郑福仁, 战守义, 杨兵. Journal of Beijing Institute of Technology (EI, CAS), 2006, No. 4, pp. 425-430.
A novel approach that integrates occlusion culling within the view-dependent rendering framework is proposed. The algorithm uses the prioritized-layered projection (PLP) algorithm to cull obscured objects, and uses an approximate visibility technique to accurately and efficiently determine which objects will become visible in the near future and to prefetch those objects from disk before they are rendered. The view-dependent rendering technique provides the ability to change the level of detail over the surface seamlessly and smoothly in real time according to cell solidity values.
Keywords: occlusion culling, prefetching, adaptive level of detail (LOD), approximate algorithm, conservative algorithm
3. Massive Files Prefetching Model Based on LSTM Neural Network with Cache Transaction Strategy
Authors: Dongjie Zhu, Haiwen Du, Yundong Sun, Xiaofang Li, Rongning Qu, Hao Hu, Shuangshuang Dong, Helen Min Zhou, Ning Cao. Computers, Materials & Continua (SCIE, EI), 2020, No. 5, pp. 979-993.
In distributed storage systems, file access efficiency has an important impact on the real-time nature of information forensics. As a popular approach to improving file access efficiency, a prefetching model fetches data before it is needed, according to the file access pattern, which reduces I/O waiting time and increases system concurrency. However, a prefetching model needs to mine the degree of association between files to ensure prefetching accuracy. With massive numbers of small files, the sheer volume of files poses a challenge to the efficiency and accuracy of relevance mining. In this paper, we propose a massive-file prefetching model based on an LSTM neural network with a cache transaction strategy to improve file access efficiency. First, we propose a file clustering algorithm based on temporal locality and spatial locality to reduce computational complexity. Second, we define cache transactions according to file occurrences in the cache, instead of time-offset-distance-based methods, to extract file block features accurately. Finally, we propose a file access prediction algorithm based on an LSTM neural network that predicts the files most likely to be accessed. Experiments show that, compared with traditional LRU and plain grouping methods, the proposed model notably increases the cache hit rate and effectively reduces I/O wait time.
Keywords: massive files, prefetching model, cache transaction, distributed storage systems, LSTM neural network
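To make the prediction step concrete, here is a minimal PyTorch sketch of an LSTM that maps a window of recent file IDs to a distribution over the next file; the top-scoring files would be queued for prefetching. This is an illustrative stand-in under assumed dimensions, not the paper's model; the clustering and cache-transaction stages are omitted.

```python
import torch
import torch.nn as nn

class NextFilePredictor(nn.Module):
    """Maps a window of recent file IDs to logits over the next file."""
    def __init__(self, num_files, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_files, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_files)

    def forward(self, seq):                  # seq: (batch, window) int64 file IDs
        out, _ = self.lstm(self.embed(seq))
        return self.head(out[:, -1])         # use the last hidden state

num_files, window = 1000, 8
model = NextFilePredictor(num_files)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# one toy training step on a synthetic access trace
seq = torch.randint(0, num_files, (64, window))
target = torch.randint(0, num_files, (64,))
loss = loss_fn(model(seq), target)
opt.zero_grad()
loss.backward()
opt.step()

# prefetch the top-k most probable next files for one recent window
topk = torch.topk(model(seq[:1]), k=3).indices
print(topk)
```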
4. A Comparison Study between Informed and Predictive Prefetching Mechanisms for I/O Storage Systems
Authors: Maen M. Al Assaf, Ali Rodan, Mohammad Qatawneh, Mohamed Riduan Abid. International Journal of Communications, Network and System Sciences, 2015, No. 5, pp. 181-186.
In this paper, we present a comparative study of informed and predictive prefetching mechanisms that were proposed to bridge the performance gap between I/O storage systems and the CPU. In particular, we focus on transparent informed prefetching (TIP) and predictive prefetching using the probability graph approach (PG). Our main objective is to show the main features, motivations, and implementation overview of each mechanism. We also present a performance evaluation that compares the two mechanisms under different cache sizes.
Keywords: informed prefetching, predictive prefetching, probability graph, parallel storage systems
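As a rough illustration of the probability graph (PG) idea this comparison builds on, the following Python sketch counts, within a small lookahead window, how often one file follows another in an access trace, then prefetches successors whose estimated conditional probability clears a threshold. The class name, window size, and threshold are assumptions for illustration, not details from the paper.

```python
from collections import defaultdict

class ProbabilityGraph:
    """Edge counts estimate P(next = B | current = A) from an access trace."""
    def __init__(self, lookahead=2, threshold=0.3):
        self.lookahead = lookahead
        self.threshold = threshold
        self.edge = defaultdict(lambda: defaultdict(int))  # A -> B -> count
        self.total = defaultdict(int)                      # A -> outgoing count

    def observe(self, trace):
        for i, a in enumerate(trace):
            for b in trace[i + 1 : i + 1 + self.lookahead]:
                self.edge[a][b] += 1
                self.total[a] += 1

    def predict(self, current):
        """Successors likely enough to be worth prefetching."""
        if self.total[current] == 0:
            return []
        return [b for b, c in self.edge[current].items()
                if c / self.total[current] >= self.threshold]

pg = ProbabilityGraph()
pg.observe(["a", "b", "c", "a", "b", "d", "a", "b", "c"])
print(pg.predict("a"))   # 'b' dominates the successors of 'a'
```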
5. Predictive Prefetching for Parallel Hybrid Storage Systems
Author: Maen M. Al Assaf. International Journal of Communications, Network and System Sciences, 2015, No. 5, pp. 161-180.
In this paper, we present a predictive prefetching mechanism based on the probability graph approach that performs prefetching between different levels in a parallel hybrid storage system. The fundamental concept of our approach is to exploit the parallel hybrid storage system's parallelism and prefetch data among multiple storage levels (e.g., solid state disks and hard disk drives) in parallel with the application's on-demand I/O read requests. In this study, we show that predictive prefetching across multiple storage levels is an efficient technique for placing data blocks needed in the near future in the uppermost levels, near the application. Our PPHSS approach extends previous ideas on predictive prefetching in two ways: (1) it reduces applications' execution time by keeping data blocks that are predicted to be accessed in the near future cached in the uppermost level; (2) it proposes a parallel data fetching scheme in which multiple fetching mechanisms (i.e., predictive prefetching and the application's on-demand data requests) work in parallel, where the first fetches data blocks among the different levels of the hybrid storage system (i.e., from low-level (slow) to high-level (fast) storage devices) and the other fetches data from the storage system to the application. Our PPHSS strategy, integrated with the predictive prefetching mechanism, significantly reduces overall I/O access time in a hybrid storage system. Finally, we developed a simulator to evaluate the performance of the proposed predictive prefetching scheme in the context of hybrid storage systems. Our results show that PPHSS can improve system performance by 4% across real-world I/O traces without the need for large caches.
Keywords: predictive prefetching, probability graph, parallel storage systems, hybrid storage system
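The two-lane fetching idea, background promotion between tiers running alongside on-demand reads, can be sketched with a worker thread. The snippet below is a toy model under stated assumptions (an in-memory set stands in for the SSD tier, a sleep stands in for an HDD read; all names are invented); it is not PPHSS itself.

```python
import queue
import threading
import time

ssd_tier = set()              # fast tier; an in-memory set stands in for SSD
prefetch_q = queue.Queue()    # blocks predicted to be needed soon

def prefetcher():
    """Background lane: promote predicted blocks from the slow tier (HDD)
    to the fast tier (SSD) while the application keeps issuing reads."""
    while True:
        block = prefetch_q.get()
        if block is None:     # shutdown sentinel
            break
        time.sleep(0.01)      # stand-in for an HDD read
        ssd_tier.add(block)

def on_demand_read(block, predicted_next):
    """On-demand lane: serve the request and queue predicted successors."""
    hit = block in ssd_tier
    for b in predicted_next:
        if b not in ssd_tier:
            prefetch_q.put(b)
    return hit

threading.Thread(target=prefetcher, daemon=True).start()
on_demand_read(1, predicted_next=[2, 3])    # miss; 2 and 3 promoted behind it
time.sleep(0.05)                            # application does other work
print(on_demand_read(2, predicted_next=[])) # True: promoted in parallel
prefetch_q.put(None)
```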
6. Web Acceleration by Prefetching in Extremely Large Latency Networks
Authors: Fumiaki Nagase, Takefumi Hiraguri, Kentaro Nishimori, Hideo Makino. American Journal of Operations Research, 2012, No. 3, pp. 339-347.
A scheme for high-speed data transfer via the Internet for Web services in an extremely large delay environment is proposed. With the widespread use of Internet services in recent years, WLAN Internet service on high-speed trains has commenced. The system is composed of a satellite communication link between the train and the ground station, which is characterized by extremely large latency of several hundred milliseconds due to long propagation delay. High-speed web access is not available to users on a train in such an extremely large latency network. Thus, a prefetch scheme for accelerating Web services in this environment is proposed. A test-bed system implementing the proposed scheme is built, and its performance is evaluated. The proposed scheme is verified to enable high-speed Web access in the extremely large delay environment compared with conventional schemes.
Keywords: extremely large latency network, satellite communication, HTTP, web prefetching, prefetching proxy server, information storage server
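A prefetching proxy of the kind described typically scans each fetched page for embedded and linked resources and requests them ahead of the client, hiding the satellite round trip. As a hedged illustration (standard-library HTML parsing only, no actual network I/O; the function names and the sample page are invented), a candidate-list extractor might look like this:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects href/src URLs from a page so a proxy can prefetch them."""
    def __init__(self, base):
        super().__init__()
        self.base, self.links = base, []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(urljoin(self.base, value))

def prefetch_list(base_url, html_text):
    """Return absolute URLs the ground-side proxy could push ahead."""
    parser = LinkExtractor(base_url)
    parser.feed(html_text)
    return parser.links

page = '<html><img src="/logo.png"><a href="news.html">news</a></html>'
print(prefetch_list("http://example.com/index.html", page))
```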
7. Adaptive Cache Allocation with Prefetching Policy over End-to-End Data Processing
Authors: Hang Qin, Li Zhu. Journal of Signal and Information Processing, 2017, No. 3, pp. 152-160.
With the speed gap between storage system access and processor computing, end-to-end data processing has become a bottleneck for improving the total performance of computer systems over the Internet. Based on an analysis of data processing behavior, an adaptive cache organization scheme with fast address calculation is proposed. This scheme makes full use of the characteristics of stack-space data access, adopts a fast address calculation strategy, and reduces the hit time of stack accesses. The stack cache can also be turned off adaptively when a stack overflow occurs, avoiding the effect of stack switching on processor performance. In addition, a prefetching policy is developed from the miss behavior of the instruction cache and the data cache, combined with data captured from the failover queue state. The proposed method maintains the order of instruction and data accesses, which facilitates prefetch extraction in end-to-end data processing.
Keywords: end-to-end data processing, storage system, cache, prefetching
8. Taxonomy of Data Prefetching for Multicore Processors (Cited: 1)
Authors: Surendra Byna, 陈勇, 孙贤和. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2009, No. 3, pp. 405-417.
Data prefetching is an effective latency-hiding technique that masks the CPU stalls caused by cache misses and bridges the performance gap between processor and memory. With hardware and/or software support, data prefetching brings data closer to a processor before it is actually needed. Many prefetching techniques have been developed for single-core processors. Recent developments in processor technology have brought multicore processors into the mainstream. While some single-core prefetching techniques are directly applicable to multicore processors, numerous novel strategies have been proposed in the past few years to take advantage of multiple cores. This paper aims to provide a comprehensive review of state-of-the-art prefetching techniques and proposes a taxonomy that classifies the various design concerns in developing a prefetching strategy, especially for multicore processors. We also compare various existing methods through analysis.
Keywords: taxonomy of prefetching strategies, multicore processors, data prefetching, memory hierarchy
9. An SPN-Based Integrated Model for Web Prefetching and Caching (Cited: 15)
Authors: 石磊, 韩英杰, 丁晓光, 卫琳, 古志民. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2006, No. 4, pp. 482-489.
The World Wide Web has become the primary means of information dissemination. Due to limited network bandwidth, users often suffer long waiting times. Web prefetching and web caching are the primary approaches to reducing user-perceived access latency and improving quality of service. In this paper, a Stochastic Petri Net (SPN) based integrated web prefetching and caching model (IWPCM) is presented, and its performance is evaluated. The performance metrics access latency, throughput, hit ratio (HR), and byte hit ratio (BHR) are analyzed and discussed. Simulations show that, compared with a caching-only model (CM), IWPCM can further improve throughput, HR, and BHR and reduce access latency. The performance evaluation based on the SPN model can provide a basis for implementations of web prefetching and caching, and the combination of the two holds promise for improving the QoS of web systems.
Keywords: stochastic Petri nets, web prefetching, web caching, performance evaluation
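The HR/BHR comparison between a caching-only model and an integrated prefetch-plus-cache model can be reproduced in miniature without any Petri net machinery. The simulation below is a loose stand-in under assumed parameters (LRU eviction by bytes, "prefetch the next object ID" as the predictor, a synthetic sequential trace), not the paper's SPN model:

```python
import random
from collections import OrderedDict

def simulate(trace, sizes, capacity, prefetch):
    """LRU cache with a byte capacity; optionally prefetch object i+1 on
    each access. Returns (hit ratio HR, byte hit ratio BHR)."""
    cache, used = OrderedDict(), 0
    hits = hit_bytes = total_bytes = 0

    def insert(obj):
        nonlocal used
        while used + sizes[obj] > capacity and cache:
            _, evicted = cache.popitem(last=False)   # evict LRU object
            used -= evicted
        cache[obj] = sizes[obj]
        used += sizes[obj]

    for obj in trace:
        total_bytes += sizes[obj]
        if obj in cache:
            cache.move_to_end(obj)
            hits += 1
            hit_bytes += sizes[obj]
        else:
            insert(obj)
        if prefetch and obj + 1 in sizes and obj + 1 not in cache:
            insert(obj + 1)                          # prefetch predicted next
    return hits / len(trace), hit_bytes / total_bytes

random.seed(0)
sizes = {i: random.randint(1, 50) for i in range(100)}
trace = [i for _ in range(5) for i in range(100)]    # repeated sequential scans
for p in (False, True):
    hr, bhr = simulate(trace, sizes, capacity=600, prefetch=p)
    print("prefetch+cache" if p else "cache-only", round(hr, 2), round(bhr, 2))
```

On this deliberately prefetch-friendly trace, the caching-only run thrashes (HR near 0) while the integrated run hits on almost every access, mirroring the qualitative conclusion above.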
10. Prefetching J^+-Tree: A Cache-Optimized Main Memory Database Index Structure (Cited: 3)
Authors: 栾华, 杜小勇, 王珊. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2009, No. 4, pp. 687-707.
As the speed gap between main memory and modern processors continues to widen, cache behavior becomes more important for main memory database systems (MMDBs). Indexing is a key component of MMDBs. Unfortunately, the predominant indexes, B^+-trees and T-trees, have been shown to utilize the cache poorly, which has triggered the development of many cache-conscious indexes, such as CSB^+-trees and pB^+-trees. Most of these cache-conscious indexes are variants of conventional B^+-trees and have better cache performance than B^+-trees. In this paper, we develop a novel J^+-tree index, inspired by the Judy structure (an associative-array data structure), and propose a more cache-optimized index, the Prefetching J^+-tree (pJ^+-tree), which applies prefetching to the J^+-tree to accelerate range scan operations. The J^+-tree stores all keys in its leaf nodes and keeps the reference values of the leaf nodes in a Judy structure, so the J^+-tree not only retains the advantages of Judy (such as fast single-value search) but also outperforms it in other respects. For example, J^+-trees achieve better performance on range queries than Judy. The pJ^+-tree index exploits prefetching techniques to further improve the cache behavior of J^+-trees and yields a speedup of 2.0 on range scans. Our extensive experimental study shows that, compared with B^+-trees, CSB^+-trees, pB^+-trees, and T-trees, pJ^+-trees provide better performance in both time (search, scan, update) and space.
Keywords: index structure, pJ^+-tree, prefetching, cache conscious, main memory database
11. I/O Acceleration via Multi-Tiered Data Buffering and Prefetching (Cited: 2)
Authors: Anthony Kougkas, Hariharan Devarajan, Xian-He Sun. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2020, No. 1, pp. 92-120.
Modern High-Performance Computing (HPC) systems are adding extra layers to the memory and storage hierarchy, called the deep memory and storage hierarchy (DMSH), to increase I/O performance. New hardware technologies, such as NVMe and SSD, have been introduced in burst buffer installations to reduce the pressure on external storage and boost the burstiness of modern I/O systems. The DMSH has demonstrated its strength and potential in practice. However, each layer of the DMSH is an independent heterogeneous system, and data movement among more layers is significantly more complex even without considering heterogeneity. How to efficiently utilize the DMSH is a research challenge facing the HPC community. Further, accessing data with high throughput and low latency is more imperative than ever. Data prefetching is a well-known technique for hiding read latency by requesting data before it is needed, moving it from a high-latency medium (e.g., disk) to a low-latency one (e.g., main memory). However, existing solutions do not consider the new deep memory and storage hierarchy, and they also suffer from under-utilization of prefetching resources and unnecessary evictions. Additionally, existing approaches implement a client-pull model in which understanding the application's I/O behavior drives prefetching decisions. Moving towards exascale, where machines run multiple applications concurrently that access files in a workflow, a more data-centric approach resolves challenges such as cache pollution and redundancy. In this paper, we present the design and implementation of Hermes: a new, heterogeneous-aware, multi-tiered, dynamic, and distributed I/O buffering system. Hermes enables, manages, supervises, and, in some sense, extends I/O buffering to fully integrate into the DMSH. We introduce three novel data placement policies to efficiently utilize all layers, and we present three novel techniques to perform memory, metadata, and communication management in hierarchical buffering systems. Additionally, we demonstrate the benefits of a truly hierarchical data prefetcher that adopts a server-push approach to data prefetching. Our evaluation shows that, in addition to automatic data movement through the hierarchy, Hermes can significantly accelerate I/O and outperforms state-of-the-art buffering platforms by more than 2x. Lastly, results show 10%-35% performance gains over existing prefetchers and over 50% compared with systems with no prefetching.
Keywords: I/O buffering, heterogeneous buffering, layered buffering, deep memory hierarchy, burst buffers, hierarchical data prefetching, data-centric architecture
12. Dynamic Data Prefetching in Home-Based Software DSMs (Cited: 1)
Authors: 胡伟武, 张福新, 刘海明. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2001, No. 3, pp. 231-241.
A major overhead in software DSM (Distributed Shared Memory) is the cost of remote memory accesses necessitated by the protocol as well as induced by false sharing. This paper introduces a dynamic prefetching method implemented in the JIAJIA software DSM to reduce the system overhead caused by remote accesses. The prefetching method records the interleaving string of INV (invalidation) and GETP (getting a remote page) operations for each cached page and analyzes the periodicity of the string when a page is invalidated on a lock or barrier. A prefetch request is issued after the lock or barrier if the periodicity analysis indicates that GETP will be the next operation in the string. Multiple prefetch requests are merged into the same message if they target the same host. Performance evaluation with eight well-accepted benchmarks on a cluster of sixteen PowerPC workstations shows that the prefetching scheme can significantly reduce page fault overhead and, as a result, achieves a performance increase of 15%-20% in three benchmarks and around 8%-10% in another three. The average extra traffic caused by useless prefetches is only 7%-13% in the evaluation.
Keywords: software DSM, remote access, prefetching, performance evaluation
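The periodicity test at the heart of this method is easy to sketch. The function below is an illustrative reading of the mechanism, with invented names and a bounded period search: it checks whether the recorded INV/GETP string for a page is periodic and, if so, whether the periodic continuation predicts a GETP next, which is the condition for issuing a prefetch after the lock or barrier.

```python
def next_op_is_getp(history, max_period=8):
    """Find the shortest period that explains the INV/GETP string; prefetch
    only if the periodic continuation says the next operation is GETP."""
    n = len(history)
    for p in range(1, min(max_period, n) + 1):
        if all(history[i] == history[i % p] for i in range(n)):
            return history[n % p] == "GETP"   # predicted next operation
    return False                              # no periodicity detected

# a page repeatedly invalidated and then re-fetched after each barrier
trace = ["INV", "GETP", "INV", "GETP", "INV"]
if next_op_is_getp(trace):
    print("issue prefetch after the barrier")  # period 2 predicts GETP next
```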
13. Runtime Engine for Dynamic Profile-Guided Stride Prefetching
Authors: 邹琼, 李晓峰, 章隆兵. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2008, No. 4, pp. 633-643.
Stride prefetching is recognized as an important technique for improving memory access performance. Prior work usually profiles and/or analyzes program behavior offline and uses the identified stride patterns to guide compilation by injecting prefetch instructions at appropriate places. Some research has tried to enable stride prefetching in runtime systems with online profiling, but it either cannot discover cross-procedural prefetch opportunities or requires special support in hardware or garbage collection. In this paper, we present a prefetch engine for the JVM (Java Virtual Machine). It first identifies candidate load operations during just-in-time (JIT) compilation and then instruments the compiled code to profile the addresses of those loads. The runtime profile is collected in a trace buffer, which triggers a prefetch controller upon a protection fault. The prefetch controller analyzes the trace to discover any stride patterns, then modifies the compiled code to inject prefetch instructions in place of the instrumentation. One major advantage of this engine is that it can detect striding loads anywhere in the code, both regular and irregular, without being limited to plain loop or procedure scopes. We found that cross-procedural patterns account for about 30% of all prefetches in representative Java benchmarks. Another major advantage is that its runtime overhead (at most 4.0%) is much smaller than the benefits it brings. Our evaluation with the Apache Harmony JVM shows that the engine achieves an average 6.2% speedup with SPECjvm98 and DaCapo on an Intel Pentium 4 platform, in spite of the runtime overhead.
Keywords: stride prefetching, dynamic profiling, runtime system
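The controller's core job, deciding whether a profiled load strides, reduces to looking for a dominant delta in its recorded addresses. Here is a hedged sketch (invented function names, an arbitrary support threshold and lookahead distance; a real engine works on binary trace buffers, not Python lists):

```python
from collections import Counter

def detect_stride(addresses, min_support=0.6):
    """If one delta dominates the profiled addresses of a load site,
    treat it as a stride and return it; otherwise return None."""
    deltas = Counter(b - a for a, b in zip(addresses, addresses[1:]))
    if not deltas:
        return None
    stride, count = deltas.most_common(1)[0]
    if stride != 0 and count / (len(addresses) - 1) >= min_support:
        return stride
    return None

# profiled addresses of one load walking an array of 8-byte elements
trace = [0x1000 + 8 * i for i in range(20)]
stride = detect_stride(trace)
if stride:
    # inject a prefetch a few iterations ahead of the load
    print(f"prefetch addr + {4 * stride} at this load site")
```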
14. Strip-Oriented Asynchronous Prefetching for Parallel Disk Systems
Authors: Yang LIU, Jian-zhong HUANG, Xiao-dong SHI, Qiang CAO, Chang-sheng XIE. Journal of Zhejiang University-Science C (Computers and Electronics) (SCIE, EI), 2012, No. 11, pp. 799-815.
Sequential prefetching schemes are widely employed in storage servers to mask disk latency and improve system throughput. However, existing schemes cannot benefit parallel disk systems as expected because they ignore the distinct internal characteristics of parallel disk systems, in particular data striping. Moreover, their aggressive prefetching pattern suffers from premature evictions and prolonged request latencies. In this paper, we propose a strip-oriented asynchronous prefetching (SoAP) technique dedicated to parallel disk systems. It settles the above problems by providing several novel features, e.g., enhanced prediction accuracy, adaptive prefetching strength, physical data layout awareness, and timely prefetching. To validate SoAP, we implemented a prototype by modifying software RAID (redundant arrays of inexpensive disks) under Linux. Experimental results demonstrate that SoAP consistently offers improved average response time and throughput for parallel disk systems under non-random workloads compared with STEP, SP, ASP, and Linux-like SEQP.
Keywords: parallel disk system, strip, sequential prefetching, asynchronous scheduling
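Layout awareness here means translating a logical read-ahead window into per-disk strip requests, so the array's disks fetch in parallel rather than serializing one long logical range. A toy illustration of that mapping under assumed RAID-0-style striping (all names and parameters are invented, not SoAP internals):

```python
def strip_of(lba, strip_size, num_disks):
    """Which disk and per-disk strip index a logical block falls on
    under simple RAID-0 striping."""
    strip = lba // strip_size
    return strip % num_disks, strip // num_disks

def per_disk_readahead(last_lba, depth, strip_size, num_disks):
    """Turn a logical read-ahead window into one strip request per disk,
    so the array services the prefetch in parallel."""
    reqs = []
    next_strip = last_lba // strip_size + 1
    for s in range(next_strip, next_strip + depth):
        disk, idx = strip_of(s * strip_size, strip_size, num_disks)
        reqs.append((disk, idx))
    return reqs

# four strips ahead of LBA 1000 land on four different disks
print(per_disk_readahead(last_lba=1000, depth=4, strip_size=128, num_disks=4))
```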
15. Optimizing the Copy-on-Write Mechanism of Docker by Dynamic Prefetching (Cited: 2)
Authors: Yan Jiang, Wei Liu, Xuanhua Shi, Weizhong Qiang. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2021, No. 3, pp. 266-274.
Docker, as a mainstream container solution, adopts the Copy-on-Write (CoW) mechanism in its storage drivers. This mechanism lets different containers share the same image. However, when a single container performs an operation such as modifying an image file, a duplicate is created in the upper read-write layer, which contributes to runtime overhead. When the accessed image file is fairly large, this additional overhead becomes non-negligible. Here we present Dynamic Prefetching Strategy Optimization (DPSO), which optimizes the CoW mechanism for a Docker container on the basis of a dynamic prefetching strategy. At the beginning of the container life cycle, DPSO pre-copies up the image files that are most likely to be copied up later, eliminating the overhead of performing this operation during application runtime. The experimental results show that DPSO has an average prefetch accuracy of greater than 78% in complex scenarios and effectively eliminates the overhead caused by the CoW mechanism.
Keywords: Docker, container, Copy-on-Write (CoW), storage driver, prefetch strategy
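The pre-copy-up step can be pictured as ranking image files by how often past runs of the same image triggered copy-ups, then eagerly copying the top candidates into the read-write layer at container start. The sketch below is a loose approximation with hypothetical paths and helper names; real storage drivers operate on overlay filesystem layers, not plain directories:

```python
import shutil
from collections import Counter
from pathlib import Path

copy_up_history = Counter()   # relative image file path -> past copy-up count

def record_copy_up(rel_path):
    """Called (hypothetically) whenever a runtime copy-up is observed."""
    copy_up_history[rel_path] += 1

def pre_copy_up(image_root, upper_root, top_k=10):
    """At container start, eagerly copy the image files most frequently
    modified in past runs into the read-write layer."""
    for rel, _ in copy_up_history.most_common(top_k):
        src = Path(image_root) / rel
        dst = Path(upper_root) / rel
        if src.is_file() and not dst.exists():
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)   # eager copy replaces a runtime copy-up

# history gathered from earlier container runs (hypothetical paths)
record_copy_up("etc/app/config.yaml")
record_copy_up("var/lib/app/data.db")
# pre_copy_up("/path/to/image/rootfs", "/path/to/container/upper")
```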
16. Modeling and Application of a Moderate Prefetching Strategy Based on Video Slicing for P2P VoD Systems (Cited: 1)
Authors: DENG Guang-qing, WEI Ting, CHEN Chang-jia, ZHU Wei, WANG Bin, WU Deng-rong. The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2012, No. 2, pp. 57-66.
In peer-to-peer (P2P) video-on-demand (VoD) streaming systems, each peer contributes a fixed amount of hard disk storage (usually 2 GB) to store viewed videos and then uploads them to other requesting peers. However, the daily hits (popularity) of different segments of a video are highly diverse, which means that taking the whole video as the basic storage unit may lead to redundant replicas of unpopular segments and a scarcity of replicas of popular segments in the P2P storage network. To address this issue, we propose a video slicing mechanism (VSM) in which the whole video is sliced into small blocks (20 MB, for instance). Under VSM, peers can moderately remove unpopular blocks from, and accordingly add popular ones to, their contributed hard disk storage, which increases the utilization of peers' contributed resources (storage and bandwidth). To reasonably assign bandwidth among peers with different download capacities, we propose a moderate prefetching strategy (MPS) based on VSM. Under MPS, when the amount of prefetched content reaches a predefined threshold, peers immediately stop prefetching video content and release the occupied bandwidth for others. A stochastic model is established to analyze the performance of MPS, showing that perfect playback continuity can be achieved under MPS. MPS was then applied to the PPLive VoD system (one of the largest P2P VoD systems in China), and measurement results demonstrate low server load and high user satisfaction. The server bandwidth contribution of the PPLive VoD system under MPS (5%) is also much lower than that of the UUSee VoD system (30%).
Keywords: bandwidth, P2P, VoD, slicing, prefetch
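The "moderate" part of MPS is simply a stop condition on the prefetch pipeline. A minimal sketch follows, with an assumed 40 MB threshold and the paper's 20 MB block size as the request unit; the function name and return convention are invented:

```python
def schedule_prefetch(buffered_bytes, playback_point,
                      threshold=40 * 2**20, block=20 * 2**20):
    """Keep prefetching one block ahead of the playback point, but stop
    (yielding bandwidth to other peers) once the prefetched amount
    reaches the threshold."""
    if buffered_bytes >= threshold:
        return None                                   # moderate: stop prefetching
    return playback_point + buffered_bytes, block     # (offset, size) to request

print(schedule_prefetch(buffered_bytes=10 * 2**20, playback_point=0))  # request
print(schedule_prefetch(buffered_bytes=45 * 2**20, playback_point=0))  # None
```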
17. A Moderately Greedy Parallel Prefetching Technique Based on RAID (Cited: 2)
Authors: 吴志刚, 冯丹, 张江陵. 计算机工程 (Computer Engineering) (CAS, CSCD), 2003, No. 18, pp. 164-165, 176.
Prefetching is an important technique commonly adopted in computer architecture design to improve system performance. Employing an effective prefetching technique in a RAID (Redundant Array of Inexpensive Disks) system can shorten the average response time of host read requests and improve the data throughput of the disk array. Based on an analysis of the data request characteristics of several major application models, a moderately greedy parallel prefetching algorithm is implemented. Experiments show that this prefetching technique is highly effective for continuous, large-volume host read requests.
Keywords: disk array, cache, prefetching, hit ratio, prefetching technique, RAID
18. An Analysis Method for Executable File Operation Traces in the Windows Environment (Cited: 2)
Author: 罗文华. 刑事技术 (Forensic Science and Technology), 2013, No. 4, pp. 61-63.
In digital forensics practice (especially when investigating malicious programs), information about executable file operations plays an important role in revealing the details of a perpetrator's behavior. Executable file operation traces typically cover the executable's name, function, location, run count, run time, operating user, and other information. The commands and tools built into the Windows operating system cannot mine this information thoroughly and completely, so research on methods for analyzing such traces is of great practical significance.
Keywords: executable file, operation traces, run count, last run time, Prefetch folder, UserAssist registry key
19. Research on Data Pre-deployment in the Information Service Flow of Digital Ocean Cloud Computing
Authors: SHI Suixiang, XU Lingyu, DONG Han, WANG Lei, WU Shaochun, QIAO Baiyou, WANG Guoren. Acta Oceanologica Sinica (SCIE, CAS, CSCD), 2014, No. 9, pp. 82-92.
Data pre-deployment in HDFS (the Hadoop Distributed File System) is more complicated than in traditional file systems. Many key issues need to be addressed, such as determining the target location of the prefetched data, the amount of data to prefetch, and the balance between data prefetching services and normal data accesses. To solve these problems, we exploit the characteristics of digital ocean information service flows and propose a deployment scheme that combines input data prefetching with output-data-oriented storage strategies. The method achieves parallelism between data preparation and data processing, thereby massively reducing the I/O time cost of digital ocean cloud computing platforms when processing multi-source information synergy tasks. The experimental results show that the scheme achieves a higher degree of parallelism than traditional Hadoop mechanisms, shortens the waiting time of a running service node, and significantly reduces data access conflicts.
Keywords: HDFS, data prefetching, cloud computing, service flow, digital ocean
20. Changes to Prefetch Files in Windows 10 and Their Impact on Forensic Analysis
Authors: 张俊, 朱勇宇. 警察技术 (Police Technology), 2021, No. 5, pp. 67-70.
Prefetching is an important mechanism that Windows uses to improve the startup performance of the operating system and applications. Windows implements this mechanism through Prefetch files, caching the files required by the system and applications into memory before they start. As a result, Prefetch files record a large number of traces of application execution, and these traces constitute valuable digital evidence. Compared with earlier versions, the structure and function of Prefetch files have changed considerably in Windows 10, yet relatively little research and parsing work has addressed them. This paper analyzes the structure and function of Prefetch files in Windows 10 and further elaborates the important role of Prefetch files in digital forensics.
Keywords: Prefetch, file structure, Windows 10 forensics