
Deduplication algorithm based on condensed nearest neighbor rule for deduplication metadata
Abstract: As deduplication operations accumulate, the metadata used to store the fingerprint index, such as manifest files, grows continuously and incurs non-negligible storage overhead. Compressing the metadata produced during deduplication, and thereby shrinking the lookup index without reducing the deduplication ratio, is therefore an important factor in further improving deduplication efficiency and storage utilization. Observing that deduplication metadata contains a large amount of redundant, highly similar data, this paper proposes Dedup2, a metadata de-redundancy algorithm based on the condensed nearest neighbor rule. Dedup2 first partitions the deduplication metadata into several classes with a clustering algorithm, then applies the condensed nearest neighbor rule within each class to eliminate highly similar entries and obtain a condensed lookup subset; new data objects are then deduplicated against this subset according to file similarity. Experimental results show that Dedup2 compresses the lookup index by more than 50% while maintaining a nearly identical deduplication ratio.
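The core pruning step described in the abstract can be illustrated with Hart's condensed nearest neighbor rule, which retains only the samples a 1-NN classifier over the retained subset would misclassify. This is a minimal sketch under toy assumptions: the 2-D feature vectors and class labels below are illustrative stand-ins, not the paper's actual chunk-fingerprint representation or clustering output.

```python
def nearest(subset, x):
    """Return the label of the retained sample closest to x (1-NN, squared Euclidean)."""
    return min(subset, key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], x)))[1]

def condense(samples):
    """Hart's condensed nearest neighbor rule: start from one sample and
    keep only those the current subset misclassifies, so highly similar
    samples within a class are discarded."""
    subset = [samples[0]]
    changed = True
    while changed:
        changed = False
        for x, label in samples:
            if nearest(subset, x) != label:
                subset.append((x, label))
                changed = True
    return subset

# Toy data: two tight clusters standing in for groups of similar metadata entries.
samples = [((0.0, 0.0), "A"), ((0.1, 0.0), "A"), ((0.0, 0.1), "A"),
           ((5.0, 5.0), "B"), ((5.1, 5.0), "B"), ((5.0, 5.1), "B")]
print(len(condense(samples)))  # → 2
```

The six near-duplicate samples collapse to two representatives, one per cluster, which mirrors how Dedup2 shrinks the lookup index while still classifying new objects correctly by similarity.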
Source: Journal on Communications (《通信学报》), EI / CSCD / Peking University Core, 2015, No. 8, pp. 1-7.
Funding: National Natural Science Foundation of China (61370069); National High Technology Research and Development Program of China ("863" Program) (2012AA012600); Fundamental Research Funds for the Central Universities (BUPT2011RCZJ16).
Keywords: deduplication; deduplication metadata; condensed nearest neighbor rule

