Abstract
Extensive experimental analysis shows that, in a cloud desktop (virtual desktop infrastructure, VDI) scenario, the stronger the work-related correlation between data owners, the higher the probability that those users hold duplicate data. Based on this finding, this paper proposes a user-aware data deduplication algorithm. The algorithm breaks away from the conventional reliance on spatial data locality and performs duplicate detection at the coarser granularity of users, so that only one user group's fingerprint data needs to be resident in memory at a time. Without lowering the deduplication ratio, it reduces the number of memory-resident fingerprints by a factor of 5 to 10 and bounds the fingerprint search scope of each lookup to a constant that does not grow linearly with the total data volume, thereby avoiding memory exhaustion as the data set grows and eliminating a large number of disk read operations. In addition, the algorithm dynamically adjusts the duplicate-fingerprint search scope according to the current load of the storage system, balancing performance against deduplication ratio so as to better meet the needs of primary-storage scenarios. Prototype experiments show that the algorithm effectively addresses the deduplication performance problem for massive data in cloud computing. Compared with OpenDedup, when the total volume of fingerprint data exceeds the available memory, the algorithm reduces disk read operations by more than 200% and improves response time by more than 3x.
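To make the idea concrete, the following is a minimal Python sketch of the user-aware lookup described above; the class and parameter names (UserAwareDedup, max_scope, register_group, and so on) are illustrative assumptions, not the paper's actual implementation. Each user's fingerprints live in their own table, a lookup consults only the writer's table plus a bounded number of work-correlated users' tables, and that bound can be tightened under load to trade deduplication ratio for response time.

import hashlib
from collections import defaultdict

class UserAwareDedup:
    """Hypothetical sketch: fingerprints are partitioned per user so the
    search scope of each lookup stays constant as total data grows."""

    def __init__(self, max_scope=4):
        self.tables = defaultdict(dict)   # user -> {fingerprint: block_id}
        self.groups = defaultdict(list)   # user -> work-correlated users
        self.max_scope = max_scope        # how many correlated tables to search
        self.next_id = 0

    def register_group(self, user, correlated_users):
        # Record which users share work and are therefore, per the paper's
        # observation, likely to share duplicate data.
        self.groups[user] = list(correlated_users)

    def set_scope(self, max_scope):
        # Under heavy load, shrink the scope to favor response time;
        # under light load, widen it to favor deduplication ratio.
        self.max_scope = max_scope

    def write_block(self, user, data: bytes) -> int:
        fp = hashlib.sha1(data).hexdigest()
        # Bounded search: the writer's own table plus at most max_scope
        # correlated users' tables, never a global fingerprint index.
        for u in [user] + self.groups[user][: self.max_scope]:
            if fp in self.tables[u]:
                return self.tables[u][fp]     # duplicate: reuse the block
        self.next_id += 1                     # unique: store a new block
        self.tables[user][fp] = self.next_id
        return self.next_id

Under these assumptions, loading or evicting a user group's fingerprints is a per-table operation rather than a scan of a global index, which is where the 5x-10x reduction in memory-resident fingerprints would come from.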
Source
Journal of Software (《软件学报》), 2015, No. 10, pp. 2581-2595 (15 pages)
Indexed in: EI, CSCD, Peking University Core (北大核心)
Funding
National Natural Science Foundation of China (61272454)
Specialized Research Fund for the Doctoral Program of Higher Education (20130141110022)
Keywords
data deduplication
cloud computing
virtual desktop infrastructure (VDI)
I/O performance bottleneck
data locality