Data security and user privacy have become crucial elements in multi-tenant data centers. The various traffic types in a multi-tenant cloud data center have their own characteristics and requirements. In the data center network (DCN), short flows are sensitive to low latency while long flows are sensitive to high throughput. Traditional security processing approaches, however, neglect these characteristics and requirements. This paper proposes a fine-grained security enhancement mechanism (SEM) to handle heterogeneous traffic and reduce the flow completion time (FCT) of short flows while ensuring the security of multi-tenant traffic transmission. Specifically, for short flows in the DCN, the lightweight GIFT encryption method is utilized. For intra-DCN long flows and inter-DCN traffic, the asymmetric elliptic curve cryptography (ECC) algorithm is utilized. NS-3 simulation results demonstrate that SEM reduces the FCT of short flows by 70% compared with several conventional encryption techniques, effectively enhancing the security and attack resistance of traffic transmission between DCNs in cloud computing environments. Additionally, SEM outperforms other encryption methods under high load and in large-scale cloud environments.
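As a concrete illustration of the per-flow cipher dispatch described above, the following Python sketch selects the lightweight cipher for short intra-DCN flows and the ECC path for everything else. The size threshold and the stub cipher routines are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of SEM-style per-flow cipher selection, assuming a
# hypothetical size cutoff (SHORT_FLOW_BYTES) and stub cipher routines;
# the paper does not specify these values or interfaces.
from dataclasses import dataclass

SHORT_FLOW_BYTES = 100 * 1024  # assumed cutoff between short and long flows

@dataclass
class Flow:
    size_bytes: int  # expected flow size
    inter_dcn: bool  # True if the flow leaves the datacenter

def gift_encrypt(payload: bytes) -> bytes:
    """Placeholder for the lightweight GIFT block cipher (not a real cipher)."""
    return bytes(b ^ 0x5A for b in payload)  # stand-in only

def ecc_encrypt(payload: bytes) -> bytes:
    """Placeholder for ECC-based hybrid encryption (not a real cipher)."""
    return payload[::-1]  # stand-in only

def sem_encrypt(flow: Flow, payload: bytes) -> bytes:
    # Short intra-DCN flows get the cheap symmetric cipher to protect FCT;
    # long flows and all inter-DCN traffic get the stronger ECC path.
    if not flow.inter_dcn and flow.size_bytes <= SHORT_FLOW_BYTES:
        return gift_encrypt(payload)
    return ecc_encrypt(payload)

print(sem_encrypt(Flow(8 * 1024, False), b"query").hex())
```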
Cloud providers (e.g., Google, Alibaba, Amazon) own large-scale datacenter networks that comprise thousands of switches and links. A load-balancing mechanism is supposed to effectively utilize the bisection bandwidth. Both Equal-Cost Multi-Path (ECMP), the canonical solution in practice, and its alternatives come with performance limitations or significant deployment challenges. In this work, we propose Closer, a scalable load-balancing mechanism for cloud datacenters. Closer keeps pace with the evolution of datacenter technology, including the deployment of Clos-based topologies, overlays for network virtualization, and virtual machine (VM) clusters. We decouple the system into centralized route calculation and distributed route decision to guarantee its flexibility and stability in large-scale networks. Leveraging In-band Network Telemetry (INT) to obtain precise link-state information, a simple but efficient algorithm implements a weighted ECMP at the edge of the fabric, which enables Closer to proactively map flows to appropriate paths and avoid excessive congestion of a single link. Closer achieves 2 to 7 times better flow completion time (FCT) at 70% network load than existing schemes that work with the same hardware environment.
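The weighted-ECMP edge decision can be pictured with a short sketch: given per-path utilization reported by INT, new flows are spread in inverse proportion to load. The inverse-load weighting rule and the names below are assumptions for illustration, not Closer's exact algorithm.

```python
# A sketch of weighted ECMP at the fabric edge, assuming INT has reported a
# utilization in [0, 1] per candidate path; the inverse-load weighting is an
# illustrative stand-in for Closer's algorithm.
import random

def weighted_ecmp(paths: dict[str, float]) -> str:
    """paths: path id -> latest INT-reported link utilization in [0, 1]."""
    # Lightly loaded paths attract more new flows, which steers traffic
    # away from a congested link before a hotspot forms.
    weights = {p: max(1e-3, 1.0 - u) for p, u in paths.items()}
    total = sum(weights.values())
    r = random.uniform(0.0, total)
    acc = 0.0
    for path, w in weights.items():
        acc += w
        if r <= acc:
            return path
    return path  # fallback for floating-point edge cases

telemetry = {"spine1": 0.85, "spine2": 0.30, "spine3": 0.45}
print(weighted_ecmp(telemetry))
```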
Big data analytics, the process of organizing and analyzing data to get useful information, is one of the primary uses of cloud services today. Traditionally, collections of data are stored and processed in a single datacenter. As the volume of data grows at a tremendous rate, it is less efficient for a single datacenter to handle such large volumes of data from a performance point of view. Large cloud service providers are deploying datacenters geographically around the world for better performance and availability. A widely used approach for analytics of geo-distributed data is the centralized approach, which aggregates all the raw data from local datacenters to a central datacenter. However, it has been observed that this approach consumes a significant amount of bandwidth, leading to worse performance. A number of mechanisms have been proposed to achieve optimal performance when data analytics are performed over geo-distributed datacenters. In this paper, we present a survey of the representative mechanisms proposed in the literature for wide-area analytics. We discuss basic ideas, present proposed architectures and mechanisms, and discuss several examples to illustrate existing work. We point out the limitations of these mechanisms, give comparisons, and conclude with our thoughts on future research directions.
Amid the landscape of Cloud Computing (CC), the Cloud Datacenter (DC) stands as a conglomerate of physical servers whose performance can be hindered by bottlenecks as CC services proliferate. A linchpin in CC's performance, the Cloud Service Broker (CSB), orchestrates DC selection. Failure to adroitly route user requests to suitable DCs turns the CSB into a bottleneck, endangering service quality. To tackle this, deploying an efficient CSB policy becomes imperative, optimizing DC selection to meet stringent Quality-of-Service (QoS) demands. Although numerous CSB policies exist, their implementation grapples with challenges such as cost and availability. This article undertakes a holistic review of diverse CSB policies while surveying the predicaments confronting current policies. The foremost objective is to pinpoint research gaps and remedies to invigorate future policy development. Additionally, it clarifies the various DC selection methodologies employed in CC, enriching practitioners and researchers alike. Employing synthetic analysis, the article systematically assesses and compares myriad DC selection techniques. These analytical insights equip decision-makers with a pragmatic framework to discern the technique apt for their needs. In summation, this discourse underscores the paramount importance of adept CSB policies in DC selection and their imperative role in optimizing CC performance, contributing to both the general modeling discourse and its practical applications in the CC domain.
Although dense interconnection datacenter networks (DCNs) (e.g., Fat Tree) provide multiple paths and high bisection bandwidth for each server pair, the widely used single-path Transmission Control Protocol (TCP) and equal-cost multipath (ECMP) transport protocols cannot achieve high resource utilization due to poor resource exploitation and allocation. In this paper, we present LESSOR, a performance-oriented multipath forwarding scheme to improve DCNs' resource utilization. By adopting an OpenFlow-based centralized control mechanism, LESSOR computes a near-optimal transmission path and bandwidth provision for each flow according to the global network view, while maintaining a nearly real-time network view with its performance-oriented flow-observing mechanism. Deployments and comprehensive simulations show that LESSOR efficiently improves network throughput, exceeding ECMP by 4.9%–38.3% under different loads. LESSOR also provides a 2%–27.7% throughput improvement over Hedera. Besides, LESSOR decreases the average flow completion time significantly.
Currently, different kinds of security devices are deployed in the cloud datacenter environment, and tenants may choose their desired security services such as firewalls and IDS (intrusion detection systems). At the same time, tenants in cloud computing datacenters are dynamic and have different requirements. Therefore, security device deployment in cloud datacenters is very complex and may lead to inefficient resource utilization. In this paper, we study this problem in a software-defined network (SDN) based multi-tenant cloud datacenter environment. We propose a load-adaptive traffic steering and packet forwarding scheme called LTSS to solve the problem. Our scheme combines an SDN controller with a TagOper plug-in to determine the minimum-load traffic paths for tenants and allows tenants to obtain their desired security services in SDN-based datacenter networks. We also build a prototype system for LTSS to verify its functionality and evaluate the performance of our design.
Bursty traffic and thousands of concurrent flows incur inevitable network congestion in datacenter networks (DCNs) and thus affect overall performance. Various transport protocols, both reactive and proactive, have been developed to mitigate network congestion. Reactive schemes use congestion signals, such as explicit congestion notification (ECN) and round-trip time (RTT), to handle congestion after it arises. However, with the growth of scale and link speed in datacenters, reactive schemes respond too slowly to congestion. By contrast, proactive protocols (e.g., credit-reservation protocols) are designed to avoid congestion before it occurs, and they have the advantages of zero data loss, fast convergence, and low buffer occupancy. But credit-reservation protocols have not been widely deployed in current DCNs (e.g., Microsoft, Amazon), which mainly deploy ECN-based protocols such as data center transport control protocol (DCTCP) and data center quantized congestion notification (DCQCN). In an actual deployment scenario, it is hard to guarantee that one protocol is deployed on every server at the same time. When a credit-reservation protocol is deployed to DCNs step by step, the network is converted to a multi-protocol state and faces the following fundamental challenges: 1) unfairness, 2) high buffer occupancy, and 3) heavy tail latency. Therefore, we propose Harmonia, which aims to converge ECN-based and credit-reservation protocols to fairness with minimal modification. To the best of our knowledge, Harmonia is the first to address the challenge of harmonizing proactive and reactive congestion control. Targeting the common ECN-based protocols, DCTCP and DCQCN, Harmonia leverages forward ECN and RTT to deliver real-time congestion information and redefines the feedback control. The evaluation results show that Harmonia effectively resolves unfair link allocation, eliminating timeouts and preventing buffer overflow.
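As a rough sketch of how forward ECN and RTT signals might be combined into a single feedback rule, consider the following rate adjustment in the spirit of DCTCP/DCQCN-style control; the gains and the reference RTT are assumed values, not Harmonia's parameters.

```python
# A sketch of a feedback rule combining an ECN-marked fraction with RTT
# inflation, illustrating the kind of redefined feedback control Harmonia
# describes; all constants here are illustrative assumptions.
def adjust_rate(rate_gbps: float, ecn_fraction: float,
                rtt_us: float, rtt_ref_us: float = 50.0) -> float:
    if ecn_fraction > 0.0:
        # Multiplicative decrease proportional to marked fraction (DCTCP-like).
        return rate_gbps * (1.0 - ecn_fraction / 2.0)
    if rtt_us > rtt_ref_us:
        # Queue building without marks yet: back off gently on RTT inflation.
        return rate_gbps * rtt_ref_us / rtt_us
    return rate_gbps + 0.1  # additive probe when the path looks idle

r = 10.0
for ecn, rtt in [(0.0, 40.0), (0.3, 80.0), (0.0, 100.0)]:
    r = adjust_rate(r, ecn, rtt)
    print(round(r, 3))
```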
Cloud computing is considered to facilitate a more cost-effective way to deploy scientific workflows. The individual tasks of a scientific workflow require a large number of datasets that are spatially distributed across different datacenters, resulting in huge delays during data transmission. Edge computing minimizes these delays and supports a fixed storage strategy for the private datasets of scientific workflows. However, this fixed storage strategy creates a significant storage-capacity bottleneck. Integrating the merits of cloud computing and edge computing while rationalizing the data placement of scientific workflows and optimizing the energy and time incurred in data transmission across different datacenters therefore remains a challenge. In this paper, the Adaptive Cooperative Foraging and Dispersed Foraging Strategies-Improved Harris Hawks Optimization Algorithm (ACF-DFS-HHOA) is proposed to optimize the energy and data transmission time when placing the data of a specific scientific workflow. ACF-DFS-HHOA takes the factors influencing the transmission delay and energy consumption of datacenters into account when rationalizing the data placement of scientific workflows. The adaptive cooperative and dispersed foraging strategies are included in HHOA to guide position updates, improving population diversity and effectively preventing the algorithm from being trapped in local optima. The experimental results confirm that ACF-DFS-HHOA minimizes the energy and data transmission time incurred during workflow execution.
One of the challenging scheduling problems in Cloud data centers is to take into consideration the allocation and migration of reconfigurable virtual machines as well as the integrated features of the hosting physical machines. We introduce a Dynamic and Integrated Resource Scheduling algorithm (DAIRS) for Cloud data centers. Unlike traditional load-balancing scheduling algorithms, which often consider only one factor such as the CPU load of physical servers, DAIRS treats CPU, memory, and network bandwidth in an integrated way for both physical and virtual machines. We develop integrated measurements for the total imbalance level of a Cloud datacenter as well as the average imbalance level of each server. Simulation results show that DAIRS performs well with regard to the total imbalance level, the average imbalance level of each server, and overall running time.
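One plausible reading of the integrated imbalance measurements is sketched below, treating a dimension's imbalance as its variance across servers and a server's own imbalance as the spread among its CPU, memory, and bandwidth loads; the paper's exact formulas may differ.

```python
# A minimal sketch of DAIRS-style integrated imbalance metrics; the
# variance-based definitions are assumptions for illustration.
from statistics import pvariance, mean

servers = [  # (cpu, memory, bandwidth) utilization per physical machine
    (0.80, 0.40, 0.30),
    (0.30, 0.50, 0.60),
    (0.50, 0.45, 0.40),
]

def total_imbalance(servers):
    # Variance of each resource dimension across the datacenter, summed.
    return sum(pvariance(dim) for dim in zip(*servers))

def server_imbalance(s):
    # How unevenly a single server uses its own three resources.
    return pvariance(s)

print(total_imbalance(servers))
print(mean(server_imbalance(s) for s in servers))
```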
In modern datacenters, the most common method of addressing network latency is to minimize flow completion time during transmission. Following their soft real-time nature, deadline-sensitive services relax the optimization of transport latency to meeting a flow's deadline. However, none of the existing deadline-sensitive protocols treats the deadline as a constraint on transmission. They merely approximate the objective of meeting a flow's deadline with a deadline-aware mechanism that assigns higher priority to tight-deadline flows so they finish as soon as possible, which performs unsatisfactorily under high fan-in degree. This drives us to take a step back and rethink whether minimizing flow completion time is the optimal way to meet flows' deadlines. In this paper, we focus on the design of a soft real-time transport protocol with deadline constraints in datacenters and present a flow-based deadline scheduling scheme for datacenter networks (FBDS). FBDS transforms unilateral deadline-aware flow transmission with priorities into a centralized, single-machine, deadline-based flow scheduling decision. In addition, FBDS blocks flow sets and postpones some flows, within the extra time allowed by their deadlines, to make room for newly arriving flows and improve the deadline meeting rate. Our simulation results on flow completion time and deadline meeting rate reveal the potential of FBDS as a transport protocol for deadline-sensitive interactive services.
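The centralized deadline-based scheduling decision can be illustrated with a classic earliest-deadline-first feasibility test: a new flow is admitted only if every active flow, reordered by deadline, still finishes on time, which implicitly postpones flows with slack. The field names and the admission rule are illustrative assumptions, not FBDS's exact algorithm.

```python
# A sketch of centralized deadline-based flow admission in the spirit of
# FBDS, using an earliest-deadline-first (EDF) feasibility check.
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    remaining: float  # transmission time still needed (ms)
    deadline: float   # absolute deadline (ms from now)

def feasible(flows):
    """EDF feasibility test: finish times never exceed deadlines."""
    t = 0.0
    for f in sorted(flows, key=lambda f: f.deadline):
        t += f.remaining
        if t > f.deadline:
            return False
    return True

def admit(active, new):
    if feasible(active + [new]):
        return True  # fits; flows with slack are implicitly postponed
    return False     # reject or queue rather than let others miss deadlines

active = [Flow("a", 2.0, 5.0), Flow("b", 3.0, 10.0)]
print(admit(active, Flow("c", 1.0, 4.0)))
```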
The proliferation of the global datasphere has forced cloud storage systems to evolve more complex architectures for different applications. The emergence of application session requests and system daemon services has created large persistent flows with diverse performance requirements that must coexist with other types of traffic. Current routing methods such as equal-cost multipath (ECMP) and Hedera take into consideration neither specific traffic characteristics nor performance requirements, which makes it difficult for these methods to meet the quality of service (QoS) of high-priority flows. In this paper, we formulate the selection of the best routing for different kinds of cloud storage flows as an integer programming problem and utilize grey relational analysis (GRA) to solve it. The resulting method is a GRA-based service-aware flow scheduling (GRSA) framework that considers the requested flow types and network status to select appropriate routing paths for flows in cloud storage datacenter networks. Results from experiments carried out on a real traffic trace show that the proposed GRSA method balances traffic loads better, conserves flow-table space, and reduces the average transmission delay for high-priority flows compared to ECMP and Hedera.
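A compact grey relational analysis pass is sketched below to show how GRA could rank candidate paths against an ideal reference series; the metrics, their scaling, and the distinguishing coefficient rho = 0.5 are illustrative assumptions, not GRSA's exact formulation.

```python
# Grey relational analysis (GRA) over candidate paths: score each path by
# its grey relational grade against the per-metric ideal, then rank.
def gra_rank(paths: dict[str, list[float]], rho: float = 0.5):
    """paths: path id -> normalized metrics in [0, 1], larger is better."""
    ref = [max(col) for col in zip(*paths.values())]  # ideal reference series
    deltas = {p: [abs(r - x) for r, x in zip(ref, xs)]
              for p, xs in paths.items()}
    flat = [d for ds in deltas.values() for d in ds]
    dmin, dmax = min(flat), max(flat)
    def grade(ds):
        # Grey relational coefficient per metric, averaged into a grade.
        coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in ds]
        return sum(coeffs) / len(coeffs)
    return sorted(((p, grade(ds)) for p, ds in deltas.items()),
                  key=lambda t: t[1], reverse=True)

# Metrics per path: (available bandwidth, 1 - loss, 1 - delay), scaled to [0, 1].
candidates = {"p1": [0.9, 0.8, 0.4], "p2": [0.6, 0.9, 0.9], "p3": [0.3, 0.5, 0.7]}
print(gra_rank(candidates))
```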
Decreasing the flow completion time (FCT) and increasing the throughput are two fundamental targets in datacenter networks (DCNs), but current mechanisms mostly focus on only one of these problems. In this paper, we propose OFMPC, an OpenFlow-based Multi-Path Cooperation framework, to decrease FCT and increase network throughput. OFMPC partitions the end-to-end transmission paths into two classes: low-delay paths (LDPs) and high-throughput paths (HTPs). Short flows are assigned to LDPs to avoid long queueing delays, while long flows are assigned to HTPs to guarantee their throughput. Meanwhile, a dynamic scheduling mechanism is presented to improve network efficiency. We evaluate OFMPC in the Mininet emulator and on a testbed, and the experimental results show that OFMPC effectively decreases FCT. Besides, OFMPC also increases throughput to more than 84% of the bisection bandwidth.
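The LDP/HTP split can be sketched as a byte-count classifier: flows start on low-delay paths and are promoted to high-throughput paths once they reveal themselves as long. The 1 MiB threshold and the promotion rule below are assumptions; OFMPC's dynamic scheduler is richer.

```python
# A sketch of OFMPC-style path classing with in-flight promotion of long
# flows; the threshold is an illustrative assumption.
LONG_FLOW_BYTES = 1 << 20  # assumed 1 MiB promotion threshold

class PathClassifier:
    def __init__(self):
        self.sent = {}  # flow id -> bytes observed so far

    def route(self, flow_id: str, nbytes: int) -> str:
        total = self.sent.get(flow_id, 0) + nbytes
        self.sent[flow_id] = total
        # Short flows stay on low-delay paths; confirmed long flows move
        # to high-throughput paths so they stop queueing behind each other.
        return "HTP" if total > LONG_FLOW_BYTES else "LDP"

clf = PathClassifier()
for chunk in [64_000, 512_000, 900_000]:
    print(clf.route("f1", chunk))
```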
Layer 2 network technology is extending beyond its traditional local-area implementation and finding wider acceptance in providers' metropolitan-area networks and large-scale cloud data center networks, mainly due to its plug-and-play capability and native mobility support. Much effort has been devoted to increasing the bisection bandwidth of a layer 2 network, which has been constrained by the spanning tree protocol that layer 2 networks use to prevent loops. The recent trend is to incorporate layer 3's routing approach into a layer 2 network so that multiple paths can be used to forward traffic between any source-destination (S-D) node pair; ECMP (equal cost multipath) is one such example. However, ECMP may still be limited in generating multiple paths due to its shortest-path (lowest-cost) requirement. In this paper, we consider a non-shortest-path routing approach, called EPMP (Equal Preference Multi-Path), that can generate more paths than ECMP. EPMP is based on ordered semi-group algebra. In EPMP routing, paths that differ in traditionally defined costs, such as hops or bandwidth, can be made equally preferred and thus become candidate paths. We found that, in comparison with ECMP, EPMP routing not only generates more paths and provides higher bisection bandwidth, but also allows bottleneck links in a hierarchical network to be identified when different traffic patterns are applied. EPMP is also versatile in that it can use various path-preference calculations to control the number and length of paths, making it as flexible as policy-based routing yet as objective as shortest-path-first routing in calculating preferred paths.
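A toy version of equal-preference selection is shown below: each path's cost vector is mapped to a coarse preference class, so paths with different hop counts can tie and all become candidates, unlike strict shortest-path ECMP. The bucketing function is a simple stand-in for EPMP's ordered semi-group preference algebra.

```python
# Equal-preference multipath as a sketch: paths tie when their coarse
# preference classes match, even if their raw costs differ.
def preference(hops: int, bw_gbps: float) -> tuple:
    # Lexicographic preference (lower is better): any path with >= 40 Gbps
    # spare bandwidth is "good enough", then hop counts are bucketed so
    # nearby lengths tie instead of only the strict minimum winning.
    return (0 if bw_gbps >= 40.0 else 1, hops // 3)

paths = {"p1": (4, 40.0), "p2": (5, 100.0), "p3": (7, 100.0)}
best = min(preference(h, b) for h, b in paths.values())
candidates = [p for p, (h, b) in paths.items() if preference(h, b) == best]
print(candidates)  # p1 and p2 tie even though their hop counts differ
```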
The fast growth of datacenter networks, in terms of both scale and structural complexity, has led to an increase in network failures and hence brings new challenges to network management systems. As network failures such as node failures are inevitable, finding fault detection and diagnosis approaches that can effectively restore network communication and reduce the loss due to failure has been recognized as an important research problem in both academia and industry. This research focuses on issues of node failure and presents a proactive fault diagnosis algorithm called heuristic breadth-first detection (HBFD), which dynamically searches the spanning tree, analyzes the dial-test data, and chooses a reasonable threshold to locate faulty nodes. Both theoretical analysis and simulation results demonstrate that HBFD diagnoses node failures effectively, with a smaller number of detections and a lower false rate, without sacrificing accuracy.
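The detection loop can be sketched as a breadth-first walk over the spanning tree that probes nodes, flags those whose dial-test failure rate crosses a threshold, and descends into a subtree only when it looks suspicious. The probe data and the 0.5 threshold are illustrative assumptions, not HBFD's actual heuristic.

```python
# A sketch of heuristic breadth-first fault detection over a spanning tree;
# tree shape, failure rates, and threshold are illustrative assumptions.
from collections import deque

tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": [], "a1": [], "a2": []}
fail_rate = {"root": 0.0, "a": 0.6, "b": 0.1, "a1": 0.7, "a2": 0.2}
THRESHOLD = 0.5  # assumed dial-test failure-rate cutoff

def hbfd(root: str) -> list[str]:
    faulty, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        if fail_rate[node] >= THRESHOLD:
            faulty.append(node)
            queue.extend(tree[node])  # suspect subtree: keep probing
        elif any(fail_rate[c] >= THRESHOLD for c in tree[node]):
            queue.extend(tree[node])  # heuristic: descend only if a child looks bad
    return faulty

print(hbfd("root"))
```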
In IoT networks, nodes communicate with each other for computational services, data processing, and resource sharing. Huge volumes of data are often generated at the network edge due to extensive communication between IoT devices, so this flood of data is transferred to the cloud data center (CDC) for efficient processing and effective storage. In a CDC, leader nodes are responsible for higher performance, reliability, deadlock handling, reduced latency, and cost-effective computational services for users. However, optimal leader selection is a computationally hard problem, as several factors, such as memory, CPU MIPS, and bandwidth, need to be considered when selecting a leader from the set of available nodes. Existing approaches to leader selection are monolithic, identifying leader nodes without taking an optimal approach to leader resources. Therefore, a genetic algorithm (GA) based leader election (GLEA) approach for optimal leader node selection is presented in this paper. The proposed GLEA uses the available resources to evaluate candidate nodes during the leader election process. In the first phase of the algorithm, the cost of individual nodes and the overall cluster cost are computed on the basis of available resources. In the second phase, the best computational nodes are selected as leader nodes by applying genetic operations to a cost function over the available resources. GLEA is then compared against the Bees Life Algorithm (BLA). The experimental results show that the proposed scheme outperforms BLA and other state-of-the-art schemes in terms of execution time, SLA violations, and resource utilization.
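A compact GA sketch in the spirit of GLEA is given below: chromosomes pick k leader nodes, fitness rewards aggregate memory, MIPS, and bandwidth, and standard selection, crossover, and mutation evolve the choice. The resource weights, cluster size, and GA parameters are illustrative assumptions, not the paper's settings.

```python
# A GA sketch for leader election: evolve sets of K leader nodes toward a
# weighted-resource cost function; all parameters are assumptions.
import random

random.seed(7)
NODES = [(random.uniform(4, 64), random.uniform(1e3, 5e4), random.uniform(1, 40))
         for _ in range(20)]              # (memory GB, CPU MIPS, bandwidth Gbps)
K, POP, GENS = 3, 30, 40
W = (0.3, 1e-3, 0.5)                      # assumed weights per resource

def fitness(ch):                          # total weighted capacity of chosen leaders
    return sum(w * r for i in ch for w, r in zip(W, NODES[i]))

def crossover(a, b):                      # sample K distinct nodes from both parents
    return tuple(random.sample(sorted(set(a) | set(b)), K))

def mutate(ch):                           # swap one leader for a random outsider
    out = [i for i in range(len(NODES)) if i not in ch]
    ch = list(ch)
    ch[random.randrange(K)] = random.choice(out)
    return tuple(ch)

pop = [tuple(random.sample(range(len(NODES)), K)) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP // 2]                # elitist selection keeps the best half
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]

print(sorted(max(pop, key=fitness)))      # indices of the elected leader nodes
```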
The VERITAS NetBackup Data Center host-level backup and recovery solution supports multiple operating system platforms, providing comprehensive data protection for large-scale Unix, Linux, Windows, and NetWare environments. NetBackup Data Center manages all backup and recovery tasks and lets enterprises define fully consistent backup policies, including database-aware and application-aware backup and recovery for Oracle, SAP, Informix, Sybase, DB2 UDB, and Lotus Notes.
Due to the unprecedented development of low-latency interconnect technology, building large-scale disaggregated architecture is drawing more and more attention from both industry and academia. Resource disaggregation is a new way to organize the hardware resources of datacenters, and has the potential to overcome the limitations, e.g., low resource utilization and low reliability, of conventional datacenters. However, the emerging disaggregated architecture brings severe performance and latency problems to the existing cloud systems. In this paper, we take memory disaggregation as an example to demonstrate the unique challenges that the disaggregated datacenter poses to the existing cloud software stacks, e.g., programming interface, language runtime, and operating system, and further discuss the possible ways to reinvent the cloud systems.
Software-defined networking (SDN), a new networking paradigm decoupling the software control logic from the data forwarding hardware, promises to enable simpler management, more flexible resource usage, and faster deployment of network services. It opens up the network functionality, application programmability, and control-to-data communication interfaces that used to be closed in conventional network devices, offering endless opportunities but also challenges for both existing players and newcomers in the market. Through a comprehensive and comparative exploration of SDN state-of-the-art techniques, standardization activities, and realistic applications, this article unveils historic and technical insights into the innovations that SDN offers toward an emerging open network ecosystem. We closely examine the critical challenges and opportunities as the networking industry is reshaped by SDN. We further shed light on future development directions of SDN in broad application scenarios, ranging from cloud datacenters and network operating systems to advanced wireless networking.