Journal Articles
3 articles found
WATuning: A Workload-Aware Tuning System with Attention-Based Deep Reinforcement Learning (cited by: 1)
Authors: Jia-Ke Ge, Yan-Feng Chai, Yun-Peng Chai. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2021, No. 4, pp. 741-761 (21 pages)
Configuration tuning is essential to optimizing the performance of systems (e.g., databases, key-value stores). High performance usually means high throughput and low latency. At present, most system tuning is performed manually (e.g., by database administrators), but it is hard for humans to achieve high performance across diverse systems and environments. In recent years there have been studies on tuning traditional database systems, but these methods all have limitations. In this article, we present WATuning, a tuning system based on attention-based deep reinforcement learning that adapts to changes in workload characteristics and optimizes system performance efficiently and effectively. First, we design WATuning's core algorithm, ATT-Tune, to accomplish the tuning task. The algorithm uses workload characteristics to generate a weight matrix that is applied to the system's internal metrics, and ATT-Tune then uses the weighted internal metrics to select an appropriate configuration. Second, WATuning can generate multiple instance models as the workload changes, so that it can provide targeted recommendation services for different types of workloads. Finally, WATuning can dynamically fine-tune itself according to the constantly changing workload in practical applications, so that its recommendations better fit the actual environment. The experimental results show that, compared with CDBTune, an existing state-of-the-art tuning method, WATuning improves throughput by 52.6% and reduces latency by 31%.
Keywords: attention mechanism; auto-tuning system; reinforcement learning (RL); workload-aware
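The abstract's weighting step (workload characteristics producing a weight matrix over internal metrics, whose weighted values then form the tuning state) can be sketched as follows. This is an illustrative reading, not the paper's code: the softmax form, the projection matrix `W`, and all dimensions are assumptions.

```python
import numpy as np

def attention_weights(workload_features, W):
    # Project workload characteristics into a weight per internal metric,
    # normalized with a softmax (one plausible attention form).
    scores = W @ workload_features
    e = np.exp(scores - scores.max())   # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
workload = rng.random(4)     # e.g., read ratio, scan ratio... (hypothetical)
metrics = rng.random(6)      # e.g., buffer hit rate, lock waits... (hypothetical)
W = rng.random((6, 4))       # learned projection (here random, for illustration)

w = attention_weights(workload, W)
state = metrics * w          # weighted internal metrics fed to the RL agent
```

The RL agent would then map `state` to a configuration recommendation; the weights let the same metrics be emphasized differently under different workloads.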
Endurable SSD-Based Read Cache for Improving the Performance of Selective Restore from Deduplication Systems
Authors: Jian Liu, Yun-Peng Chai, Xiao Qin, Yao-Hong Liu. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2018, No. 1, pp. 58-78 (21 pages)
Deduplication has been commonly used in both enterprise storage systems and cloud storage. To overcome the performance challenge of selective restore operations in deduplication systems, a solid-state-drive-based (i.e., SSD-based) read cache can be deployed to speed them up by dynamically caching popular restore contents. Unfortunately, the frequent data updates induced by classical cache schemes (e.g., LRU and LFU) significantly shorten SSDs' lifetime while slowing down I/O processing in SSDs. To address this problem, we propose a new solution, LOP-Cache, which greatly improves the write durability of SSDs as well as I/O performance by enlarging the proportion of long-term popular (LOP) data among the data written into the SSD-based cache. LOP-Cache keeps LOP data in the SSD cache for a long period to decrease the number of cache replacements. Furthermore, it prevents unpopular or unnecessary data in deduplication containers from being written into the SSD cache. We implemented LOP-Cache in a prototype deduplication system to evaluate its performance. Our experimental results indicate that LOP-Cache shortens the latency of selective restore by an average of 37.3% at the cost of a small SSD-based cache with only 5.56% of the capacity of the deduplicated data. Importantly, LOP-Cache improves SSDs' lifetime by a factor of 9.77. The evidence shows that LOP-Cache offers a cost-efficient SSD-based read cache solution to boost selective-restore performance for deduplication systems.
Keywords: data deduplication; solid state drive (SSD); flash; cache; endurance
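The core idea described above, admitting only long-term popular data into the SSD cache to cut cache writes, can be sketched as an admission-filtered cache. The class name, the access-count admission test, and the threshold are illustrative assumptions, not the paper's actual policy.

```python
from collections import defaultdict, OrderedDict

class LOPCache:
    """Sketch of an admission-filtered read cache: a block is written to
    the SSD cache only after it has proven long-term popularity, so
    one-shot restore data never costs an SSD write (illustrative only)."""

    def __init__(self, capacity, admit_threshold=3):
        self.capacity = capacity
        self.admit_threshold = admit_threshold
        self.counts = defaultdict(int)   # long-window access counts
        self.cache = OrderedDict()       # block_id -> data, LRU order

    def get(self, block_id, fetch):
        self.counts[block_id] += 1
        if block_id in self.cache:
            self.cache.move_to_end(block_id)     # refresh recency
            return self.cache[block_id]
        data = fetch(block_id)                   # miss: read from backing store
        if self.counts[block_id] >= self.admit_threshold:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)   # evict least recently used
            self.cache[block_id] = data          # one SSD write, LOP data only
        return data

cache = LOPCache(capacity=2)
fetch = lambda b: f"data-{b}"
for _ in range(3):
    cache.get("hot", fetch)      # admitted once it proves popularity
cache.get("cold", fetch)         # a one-shot block is served but not cached
```

Compared with plain LRU, which writes every missed block to the SSD, the admission filter trades a few extra backing-store reads for far fewer SSD writes.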
MacroTrend: A Write-Efficient Cache Algorithm for NVM-Based Read Cache
Authors: Ning Bao, Yun-Peng Chai, Xiao Qin, Chuan-Wen Wang. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2022, No. 1, pp. 207-230 (24 pages)
Future storage systems are expected to contain a wide variety of storage media and layers due to the rapid development of NVM (non-volatile memory) techniques. For NVM-based read caches, many kinds of NVM devices cannot withstand frequent data updates due to limited write endurance or the high energy cost of writing. However, traditional cache algorithms have to update cached blocks frequently, because it is difficult for them to predict long-term popularity from the limited information they keep about data blocks, such as a single value or a queue reflecting frequency or recency. In this paper, we propose a new MacroTrend (macroscopic trend) prediction method that discovers long-term hot blocks through their macro trends, as illustrated by their access-count histograms. We then design a new cache replacement algorithm based on MacroTrend prediction that greatly reduces the write amount while improving the hit ratio. We conduct extensive experiments driven by a series of real-world traces and find that, compared with LRU, MacroTrend can significantly reduce the write amounts of NVM cache devices at similar hit ratios, leading to longer NVM lifetime or lower energy consumption.
Keywords: non-volatile memory (NVM); solid state disk (SSD); cache; endurance
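One simple way to read a "macroscopic trend" out of a block's access-count histogram, as the abstract describes, is a least-squares slope over the per-window counts: a rising slope suggests a long-term hot block worth an NVM write, a falling one suggests fading popularity. The slope formula here is one plausible trend measure, not necessarily the paper's exact definition.

```python
def macro_trend(histogram):
    """Least-squares slope of a block's per-window access counts.
    Positive: popularity is growing; negative: fading; zero: flat.
    (Illustrative trend measure, not the paper's exact formula.)"""
    n = len(histogram)
    mean_x = (n - 1) / 2
    mean_y = sum(histogram) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(histogram))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den if den else 0.0

rising = macro_trend([1, 2, 4, 7, 11])   # counts grow window over window
fading = macro_trend([9, 6, 4, 2, 1])    # counts shrink window over window
```

A replacement policy in this spirit would prefer to admit and retain blocks with the largest positive trend, keeping frequency-spike one-off blocks out of the NVM cache.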