
Leftover bandwidth-aware peer selection algorithm for inter-datacenter content distribution (数据中心间空闲带宽感知的内容分发算法)

Cited by: 3
Abstract: Leftover bandwidth appears on inter-datacenter links during non-overlapping time windows. To exploit it, an approach that uses this bandwidth to distribute delay-tolerant data is proposed, and a distributed, scalable leftover bandwidth-aware peer selection algorithm named LBAPS is designed. LBAPS avoids centralized optimization, which fails to utilize leftover bandwidth effectively when there are many destination nodes, making it suitable for large destination sets. To match the best idle-bandwidth nodes, LBAPS selects peers according to a composite metric. To upload file blocks preferentially to nodes with large leftover bandwidth, and to spread distinct blocks across more nodes as early as possible, it adds two further strategies: resource reservation based on a threshold, and exiting an upload at the end of a time slice. A content cloud prototype, P2PStitcher, was implemented on top of LBAPS. Experiments on PlanetLab show that the strategies proposed in LBAPS effectively reduce the average delivery time.
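The abstract names three strategies (composite-metric peer selection, threshold-based resource reservation, time-sliced upload exit) but does not give LBAPS's actual formulas. A minimal sketch of the first two, assuming a hypothetical weighted score over normalized leftover bandwidth and idle-window overlap; the weights, bandwidth cap, and threshold below are invented for illustration and are not from the paper:

```python
from dataclasses import dataclass

@dataclass
class Peer:
    name: str
    leftover_bw: float     # Mb/s of idle capacity on the peer's link right now
    window_overlap: float  # fraction [0, 1] of its idle window overlapping the sender's

def composite_score(p: Peer, w_bw: float = 0.7, w_overlap: float = 0.3,
                    bw_cap: float = 100.0) -> float:
    # Hypothetical composite metric: weighted sum of normalized leftover
    # bandwidth and idle-window overlap (weights and cap are illustrative).
    return w_bw * min(p.leftover_bw / bw_cap, 1.0) + w_overlap * p.window_overlap

def select_peers(peers: list[Peer], threshold: float = 0.5) -> list[Peer]:
    # Threshold-based reservation: keep only peers scoring at or above the
    # threshold, best first, so blocks reach high-leftover-bandwidth nodes
    # early. (The time-sliced upload exit of the third strategy is omitted.)
    ranked = sorted(peers, key=composite_score, reverse=True)
    return [p for p in ranked if composite_score(p) >= threshold]

peers = [Peer("a", 80, 0.9), Peer("b", 10, 0.2), Peer("c", 50, 0.6)]
print([p.name for p in select_peers(peers)])  # ['a', 'c']
```

Ranking before filtering matters here: it lets the sender start uploads to the highest-scoring peers first, which is the priority behavior the abstract describes.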
Source: Journal on Communications (《通信学报》), EI / CSCD / Peking University Core Journal, 2013, No. 7, pp. 24-33 (10 pages).
Funding: National High Technology Research and Development Program of China (863 Program) (2013AA013503); National Natural Science Foundation of China (61272532); Natural Science Foundation of Jiangsu Province (BK2011335).
Keywords: content cloud; P2P; CDN; average delivery time; PlanetLab

Citing documents: 3

Second-level citing documents: 21
