
An Improved CRUSH Algorithm Based on a Temperature Factor in Ceph Storage
Cited by: 2
Abstract: To address the problem that the CRUSH algorithm in the Ceph storage system places highly correlated small-file data objects on the same storage node, an improved CRUSH algorithm based on a temperature factor is proposed. The improved algorithm computes how frequently user write requests hit a given node in the cluster, dynamically increases that node's temperature factor accordingly, and uses the temperature factor to weight the original CRUSH computation so that a more suitable storage node is selected. Experiments comparing the improved algorithm with the original algorithm verify its effectiveness. The results show that the improved CRUSH algorithm effectively resolves the load-balancing problems caused by storing small files, avoiding I/O congestion on a single node and network congestion, without affecting the load balance of the cluster as a whole.
Source: Journal of Chengdu University of Information Technology (成都信息工程学院学报), 2015, No. 6, pp. 563-567 (5 pages)
Funding: Provincial Department of Science and Technology, Science and Technology Support Program project (2012SZ0070)
Keywords: computer application technology; distributed storage; Ceph; CRUSH; load balancing
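
To make the mechanism described in the abstract concrete, below is a minimal Python sketch of the general idea: track per-node write frequency, derive a temperature factor from it, and discount each node's CRUSH weight by that factor during a straw-style selection. This is an illustration under stated assumptions only; the names (Node, straw_hash, select_osd, TEMP_SCALE) and the simplified hash-times-weight draw are assumptions for this sketch, not the paper's actual formulas or the Ceph CRUSH implementation.

```python
import hashlib

# Hypothetical sketch of the temperature-factor idea: nodes that receive many
# writes become "hot" and have their CRUSH weight discounted, so subsequent
# highly correlated small objects are more likely to land on other nodes.

TEMP_SCALE = 0.1  # assumed constant controlling how strongly heat penalizes a node


class Node:
    def __init__(self, name, weight):
        self.name = name          # OSD identifier
        self.weight = weight      # static CRUSH weight (e.g. capacity-based)
        self.write_count = 0      # recent write requests observed on this node

    def temperature(self):
        # Temperature grows with write frequency; 1.0 means "cold".
        return 1.0 + TEMP_SCALE * self.write_count

    def effective_weight(self):
        # Hot nodes get their CRUSH weight discounted.
        return self.weight / self.temperature()


def straw_hash(obj_id, node_name):
    # Deterministic pseudo-random draw in [0, 1) per (object, node) pair,
    # standing in for the straw-bucket hash used by CRUSH.
    digest = hashlib.md5(f"{obj_id}:{node_name}".encode()).hexdigest()
    return int(digest, 16) / float(1 << 128)


def select_osd(obj_id, nodes):
    # Straw-style selection: each node's draw is scaled by its effective weight
    # and the largest scaled draw wins, so hotter nodes are chosen less often.
    best = max(nodes, key=lambda n: straw_hash(obj_id, n.name) * n.effective_weight())
    best.write_count += 1  # record the write so the chosen node heats up
    return best


if __name__ == "__main__":
    cluster = [Node("osd.0", 1.0), Node("osd.1", 1.0), Node("osd.2", 1.0)]
    placement = [select_osd(f"smallfile-{i}", cluster).name for i in range(1000)]
    for n in cluster:
        print(n.name, placement.count(n.name))
```

In this sketch a node's effective weight shrinks as its write counter grows, which spreads bursts of correlated small writes across the cluster; a real system would also need the write counters (and hence the temperature) to decay over time, which the sketch omits.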

