
Multi-brokerage Joint Deduplication Scheme Based on Blockchain
Abstract: With the rapid development of the multi-cloud storage market, more and more users choose to store their data in the cloud, and duplicate data in the cloud environment has grown explosively as a result. Because cloud service brokerages are independent of each other, traditional data deduplication can only eliminate redundant data on the few cloud servers managed by a single brokerage. To further improve data deduplication in the cloud environment, this study proposes a multi-brokerage joint deduplication scheme. Blockchain technology is used to promote cooperation among cloud service brokerages and to build a brokerage alliance, extending the scope of data deduplication from the clouds managed by a single brokerage to the multi-cloud managed by multiple brokerages. At the same time, the scheme brings win-win benefits to users, cloud service brokerages, and cloud service providers. Experiments show that the multi-brokerage joint deduplication scheme can significantly improve the deduplication effect and save network bandwidth.
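The cross-brokerage deduplication described in the abstract can be illustrated with a minimal sketch. All names and structures below are assumptions for illustration, not the paper's actual protocol: a plain dictionary stands in for the blockchain ledger that the brokerage alliance would consult, and content fingerprints are SHA-256 digests.

```python
import hashlib


class SharedLedger:
    """Simulated alliance-wide ledger mapping data fingerprints to the
    brokerage that first stored the data (stand-in for the blockchain)."""

    def __init__(self):
        self.index = {}  # fingerprint -> brokerage name

    def lookup(self, fp):
        return self.index.get(fp)

    def record(self, fp, brokerage):
        self.index[fp] = brokerage


class Brokerage:
    """A cloud service brokerage that checks the shared ledger before
    uploading, so duplicates held by any alliance member are skipped."""

    def __init__(self, name, ledger):
        self.name = name
        self.ledger = ledger
        self.bytes_uploaded = 0  # bandwidth actually spent

    def upload(self, data: bytes) -> str:
        fp = hashlib.sha256(data).hexdigest()
        owner = self.ledger.lookup(fp)
        if owner is None:
            # First copy anywhere in the alliance: store it and record ownership.
            self.ledger.record(fp, self.name)
            self.bytes_uploaded += len(data)
            return "stored"
        # A duplicate already held by some brokerage: keep only a reference.
        return f"deduplicated (held by {owner})"


ledger = SharedLedger()
a = Brokerage("CSB-A", ledger)
b = Brokerage("CSB-B", ledger)
print(a.upload(b"report contents"))  # stored
print(b.upload(b"report contents"))  # deduplicated (held by CSB-A)
print(b.bytes_uploaded)              # 0 -- no bandwidth spent on the copy
```

The point of the sketch is the scope change the paper claims: without the shared ledger, CSB-B could not know that CSB-A already holds the data, so the second upload would consume storage and bandwidth again.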
Authors: ZHANG Ya-Nan, CHEN Wei-Wei, FU Yin-Jin, XU Kun (School of Command and Control Engineering, Army Engineering University, Nanjing 210007, China)
Source: Computer Systems & Applications (《计算机系统应用》), 2022, No. 6, pp. 86-92.
Fund: Natural Science Foundation of Jiangsu Province (BK20191327).
Keywords: multi-cloud storage; data deduplication; blockchain; cloud service brokerage (CSB); cloud computing; smart contract
