Large latency of applications brings revenue loss to cloud infrastructure providers in the cloud data center. The existing controllers of the software-defined networking architecture can fetch and process traffic information in the network; consequently, these controllers can optimize only the network latency of applications. However, the serving latency of applications is also an important factor in the user experience delivered for arriving requests. Unintelligent request routing causes large serving latency if arriving requests are allocated to overloaded virtual machines. To deal with the request routing problem, this paper proposes a workload-aware software-defined networking controller architecture. Request routing algorithms are then proposed to minimize the total round-trip time for every type of request by considering both the congestion in the network and the workload on virtual machines (VMs). Finally, the proposed algorithms are evaluated in a simulated prototype. The simulation results show that the proposed methodology is efficient compared with existing approaches.
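The core idea above is that routing on network latency alone misses serving latency. A minimal sketch of such workload-aware routing: estimate each VM's total round-trip time as network latency plus queueing/serving delay, and route to the minimum. The field names, latency model, and all numbers are illustrative assumptions, not the paper's actual algorithm.

```python
# Hedged sketch: route a request to the VM with the smallest estimated
# total RTT = network latency + serving latency (queue length / service rate).
# The data layout and the simple queueing estimate are assumptions.

def pick_vm(vms):
    """vms: list of dicts with 'net_ms' (network RTT in ms), 'queued'
    (requests waiting), and 'rate_per_s' (service rate). Returns the VM
    with the smallest estimated total round-trip time."""
    def est_rtt_ms(vm):
        serving_ms = 1000.0 * (vm["queued"] + 1) / vm["rate_per_s"]
        return vm["net_ms"] + serving_ms
    return min(vms, key=est_rtt_ms)

vms = [
    {"id": "vm1", "net_ms": 5.0,  "queued": 40, "rate_per_s": 100.0},  # near but overloaded
    {"id": "vm2", "net_ms": 20.0, "queued": 2,  "rate_per_s": 100.0},  # farther but lightly loaded
]
print(pick_vm(vms)["id"])  # → vm2
```

A network-only router would pick vm1 (5 ms vs. 20 ms); accounting for the queue makes the farther, idle VM the clear winner, which is exactly the abstract's motivating scenario.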
How to effectively reduce the energy consumption of large-scale data centers is a key issue in cloud computing. This paper presents a novel low-power task scheduling algorithm (L3SA) for large-scale cloud data centers. A winner tree is introduced that treats the data nodes as the leaf nodes of the tree, and the final winner is selected with the goal of reducing energy consumption. The complexity of large-scale cloud data centers is fully considered, and a task comparison coefficient is defined to make the task scheduling strategy more reasonable. Experiments and performance analysis show that the proposed algorithm can effectively improve node utilization and reduce the overall power consumption of the cloud data center.
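The winner-tree selection mentioned above can be pictured as a bottom-up tournament over the data nodes. The sketch below is a generic winner-tree, not the paper's L3SA implementation; the node fields and the energy-cost scoring are illustrative assumptions.

```python
# Hedged sketch of winner-tree selection: pairwise 'matches' propagate the
# lower-scoring (cheaper-in-energy) node upward until one winner remains.

def build_winner_tree(nodes, score):
    """Bottom-up tournament: each round keeps the winner (lower score,
    i.e. lower assumed energy cost) of each pair; returns the final winner."""
    level = list(nodes)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            pair = level[i:i + 2]        # a lone trailing node advances as-is
            nxt.append(min(pair, key=score))
        level = nxt
    return level[0]

# Example: pick the data node with the lowest assumed power draw per unit load.
nodes = [
    {"id": "n1", "power_w": 120, "util": 0.8},
    {"id": "n2", "power_w": 90,  "util": 0.4},
    {"id": "n3", "power_w": 150, "util": 0.9},
    {"id": "n4", "power_w": 100, "util": 0.3},
]
winner = build_winner_tree(nodes, score=lambda n: n["power_w"] * n["util"])
print(winner["id"])  # → n4
```

The tree structure matters because after assigning a task, only the path from the affected leaf to the root needs to be replayed, giving O(log n) reselection instead of a full scan.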
This paper investigates autonomic cloud data center networks, a solution for increasingly complex computing environments that addresses management and cost issues to meet users' growing demands. Virtualized cloud networking provides a plethora of rich online capabilities, including self-configuration, self-healing, self-optimization and self-protection. In addition, we draw on intelligent agents and multi-agent systems, covering the system model, strategy, and autonomic cloud computing, including the development and implementation of independent computing systems. Then, combining this architecture with the autonomous unit, we propose MCDN (Model of Autonomic Cloud Data Center Networks). This model can define intelligent states, elaborate the composition structure, and cover the complete life cycle. Finally, the proposed public infrastructure can be provided with the autonomous unit in the supported interaction model.
Virtual Machine (VM) allocation for multiple tenants is an important and challenging problem to provide efficient infrastructure services in cloud data centers. Tenants run applications on their allocated VMs, and the network distance between a tenant's VMs may considerably impact the tenant's Quality of Service (QoS). In this study, we define and formulate the multi-tenant VM allocation problem in cloud data centers, considering the VM requirements of different tenants and introducing the allocation goal of minimizing the sum of the VMs' network diameters of all tenants. Then, we propose a Layered Progressive resource allocation algorithm for multi-tenant cloud data centers based on the Multiple Knapsack Problem (LP-MKP). The LP-MKP algorithm uses a multi-stage layered progressive method for multi-tenant VM allocation and efficiently handles unprocessed tenants at each stage. This reduces resource fragmentation in cloud data centers, decreases the differences in QoS among tenants, and improves tenants' overall QoS in cloud data centers. We perform experiments to evaluate the LP-MKP algorithm and demonstrate that it can provide significant gains over other allocation algorithms.
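The allocation objective above (minimizing each tenant's network diameter) can be illustrated with a toy first-fit placer: keep a tenant on one host if it fits (diameter 0), otherwise spread it (diameter 1 in a one-rack model). This is only the objective's intuition under stated assumptions, not the paper's LP-MKP algorithm.

```python
# Hedged sketch: tenants placed largest-first; single-host placement is
# preferred because it gives network diameter 0 for that tenant.
# Host capacities, tenant sizes, and the 0/1 diameter model are assumptions.

def allocate(tenants, hosts):
    """tenants: {name: vm_count}; hosts: {host: free_slots} (mutated).
    Returns (placement, diameters) per tenant."""
    placement, diameters = {}, {}
    for name, need in sorted(tenants.items(), key=lambda t: -t[1]):
        fit = [h for h, free in hosts.items() if free >= need]
        if fit:                                   # whole tenant on one host
            h = min(fit, key=lambda h: hosts[h])  # tightest fit, less fragmentation
            hosts[h] -= need
            placement[name], diameters[name] = {h: need}, 0
        else:                                     # spread greedily across hosts
            plan = {}
            for h in sorted(hosts, key=lambda h: -hosts[h]):
                take = min(hosts[h], need)
                if take:
                    hosts[h] -= take
                    plan[h] = take
                    need -= take
                if need == 0:
                    break
            placement[name], diameters[name] = plan, 1
    return placement, diameters

hosts = {"h1": 4, "h2": 3, "h3": 3}
tenants = {"A": 5, "B": 3}
placement, diameters = allocate(tenants, hosts)
print(diameters)  # → {'A': 1, 'B': 0}
```

Tenant A (5 VMs) cannot fit on any single host and must spread; B lands whole on one host. Summing the diameters over tenants gives exactly the kind of objective the LP-MKP formulation minimizes.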
To enhance the resilience of power systems with offshore wind farms (OWFs), a proactive scheduling scheme is proposed to unlock the flexibility of cloud data centers (CDCs) in responding to the uncertain spatial and temporal impacts induced by hurricanes. Total life simulation (TLS) is adopted to project the local weather conditions at transmission lines and OWFs before, during, and after the hurricane. The static power curve of wind turbines (WTs) is used to capture the output of OWFs, and fragility analysis of transmission-line components is used to formulate the time-varying failure rates of transmission lines. A novel distributionally robust ambiguity set is constructed with a discrete support set, where the impacts of hurricanes are depicted by these supports. To minimize load shedding and dropped workloads, the spatial and temporal demand-response capabilities of CDCs, based on task migration and delay tolerance, are incorporated into resilient management. The flexibility of CDC power consumption is integrated into a two-stage distributionally robust optimization problem with conditional value at risk (CVaR). Based on Lagrange duality, this problem is reformulated into its deterministic counterpart and solved by a novel decomposition method with hybrid cuts, admitting fewer iterations and a faster convergence rate. The effectiveness of the proposed resilient management strategy is verified through case studies conducted on a modified IEEE RTS-24 system, which includes 4 data centers and 5 offshore wind farms.
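For readers unfamiliar with the CVaR term embedded in the two-stage problem above, its standard variational (Rockafellar-Uryasev) form is the one such models typically dualize; this is general background, and the paper's exact notation may differ:

```latex
% Conditional value at risk at confidence level \alpha for a loss X:
% the minimizing \eta coincides with the \alpha-quantile (VaR), and the
% (X-\eta)^{+} term penalizes only losses exceeding it.
\mathrm{CVaR}_{\alpha}(X)
  \;=\; \min_{\eta \in \mathbb{R}}
    \left\{ \eta + \frac{1}{1-\alpha}\,
    \mathbb{E}\!\left[(X-\eta)^{+}\right] \right\}
```

Because the inner expression is convex and piecewise linear in $\eta$, it embeds directly into linear two-stage reformulations, which is what makes the Lagrange-duality-based deterministic counterpart tractable.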
Cloud data centers, such as Amazon EC2, host myriad big data applications using Virtual Machines (VMs). As these applications are communication-intensive, optimizing network transfer between VMs is critical to the performance of these applications and to the network utilization of data centers. Previous studies have addressed this issue by scheduling network flows with coflow semantics or by optimizing VM placement with traffic considerations. However, coflow scheduling and VM placement have been conducted orthogonally. In fact, these two mechanisms are mutually dependent, and optimizing these two complementary degrees of freedom independently turns out to be suboptimal. In this paper, we present VirtCO, a practical framework that jointly schedules coflows and places VMs ahead of VM launch to optimize the overall performance of data center applications. We model the joint coflow scheduling and VM placement optimization problem and propose effective heuristics for solving it. We further implement VirtCO with OpenStack and deploy it in a testbed environment. Extensive evaluation with real-world traces shows that compared with state-of-the-art solutions, VirtCO reduces the average coflow completion time by up to 36.5%. The new framework is also compatible with, and readily deployable within, existing data center architectures.
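The coflow semantics referenced above mean a coflow finishes only when its slowest flow does, so schedulers target coflow completion time (CCT) rather than per-flow metrics. A minimal illustration of why ordering matters (toy sizes and a single bottleneck link, not VirtCO's actual scheduler):

```python
# Hedged sketch: serving coflows one at a time on a shared link, shorter
# coflows first, reduces average CCT versus FIFO order. Sizes and the
# single-link model are illustrative assumptions.

def avg_cct(order, rate):
    """order: coflow total sizes (MB) served sequentially at `rate` MB/s.
    A coflow's CCT is the instant its last byte is sent; returns the mean."""
    t, ccts = 0.0, []
    for size in order:
        t += size / rate
        ccts.append(t)
    return sum(ccts) / len(ccts)

rate = 100.0                     # MB/s on the bottleneck link
coflows = [400.0, 100.0]         # MB per coflow, in FIFO arrival order
print(avg_cct(coflows, rate))           # FIFO: CCTs 4 s and 5 s → 4.5
print(avg_cct(sorted(coflows), rate))   # shortest-first: 1 s and 5 s → 3.0
```

VM placement changes which bytes must cross the network at all, which is why the paper argues placement and coflow scheduling are mutually dependent: co-locating a coflow's endpoints can shrink `size` itself, not just reorder the queue.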
With the wide application of virtualization technology in cloud data centers, how to effectively place virtual machines (VMs) is becoming a major issue for cloud providers. The existing virtual machine placement (VMP) solutions mainly optimize server resources. However, they pay little attention to network resource optimization and do not consider the impact of the network topology or the current network traffic. A multi-resource-constrained VMP scheme is proposed. Firstly, the authors attempt to reduce the total communication traffic in the data center network, which is abstracted as a quadratic assignment problem; they then aim at optimizing the network maximum link utilization (MLU). Given only slight variation in the total traffic, minimizing MLU balances the network traffic distribution and reduces network congestion hotspots; this is a classic combinatorial optimization problem and is NP-hard. Ant colony optimization and 2-opt local search are combined to solve the problem. Simulation shows that MLU is decreased by 20% and the number of hot links is decreased by 37%.
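The 2-opt local search mentioned above can be sketched as repeated pairwise swaps of VM placements, keeping a swap only if it lowers MLU. The traffic matrix, the single-uplink-per-host model, and all numbers below are illustrative assumptions, not the paper's setup (which also layers ant colony optimization on top).

```python
# Hedged sketch: 2-opt over a VM-to-host assignment, minimizing maximum
# link utilization (MLU) in a toy model where inter-host traffic loads the
# uplink of both endpoints' hosts.
import itertools

def mlu(assign, traffic, capacity):
    """Max link utilization over host uplinks; intra-host traffic is free."""
    load = {}
    for (i, j), t in traffic.items():
        if assign[i] != assign[j]:
            load[assign[i]] = load.get(assign[i], 0.0) + t
            load[assign[j]] = load.get(assign[j], 0.0) + t
    return max((l / capacity for l in load.values()), default=0.0)

def two_opt(assign, traffic, capacity):
    """Swap pairs of VM placements; keep a swap only if it lowers MLU."""
    best, improved = mlu(assign, traffic, capacity), True
    while improved:
        improved = False
        for a, b in itertools.combinations(list(assign), 2):
            assign[a], assign[b] = assign[b], assign[a]
            cur = mlu(assign, traffic, capacity)
            if cur < best:
                best, improved = cur, True
            else:  # revert a non-improving swap
                assign[a], assign[b] = assign[b], assign[a]
    return best

traffic = {("v1", "v2"): 10.0, ("v3", "v4"): 10.0, ("v1", "v3"): 1.0}
assign = {"v1": "h1", "v2": "h2", "v3": "h1", "v4": "h2"}
print(two_opt(assign, traffic, capacity=20.0))  # → 0.05 (chatty pairs co-located)
```

Starting from MLU 1.0 (both chatty pairs split across hosts), the search co-locates v1 with v2 and v3 with v4, leaving only the 1 MB/s cross-traffic on the uplinks.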
Cloud computing as an emerging technology promises to provide reliable and available services on demand. However, offering services for mobile requirements without dynamic and adaptive migration may hurt the performance of deployed services. In this paper, we propose MAMOC, a cost-effective approach for selecting the server and migrating services to attain enhanced QoS more economically. The goal of MAMOC is to minimize the total operating cost while guaranteeing the constraints on resource demands, storage capacity, access latency and economics, including selling price and reputation grade. First, we devise an optimal objective model with multiple constraints, describing the relationship between the operating cost and the above constraints. Second, a normalized method is adopted to calculate the operating cost for each candidate VM. Then we give a detailed presentation of the online algorithm MAMOC, which determines the optimal server. To evaluate the performance of our proposal, we conducted extensive simulations on three typical network topologies and a realistic data center network. Results show that MAMOC is scalable and robust at larger scales of requests and VMs in the cloud environment. Moreover, MAMOC decreases the competitive ratio by identifying the optimal migration paths, while satisfying the SLA constraints as far as possible.
The increase in computing capacity caused a rapid and sudden increase in the Operational Expenses (OPEX) of data centers. OPEX reduction is a big concern and a key target in modern data centers. In this study, the scalability of the Dynamic Voltage and Frequency Scaling (DVFS) power management technique is studied under multiple workloads. The environment of this study is a 3-tier data center. We conducted multiple experiments to find the impact of using DVFS on energy reduction under two scheduling techniques, namely Round Robin and Green. We observed that the amount of energy reduction varies according to data center (DC) load: as the load increases, the energy reduction decreases. Experiments using the Green scheduler showed around an 83% decrease in power consumption when DVFS is enabled and the DC is lightly loaded. When the DC is fully loaded, so that the servers' CPUs are constantly busy with no idle time, the effect of DVFS decreases and stabilizes at less than 10%. Experiments using the Round Robin scheduler showed less energy saving by DVFS, specifically around 25% at light DC load and less than 5% at heavy DC load. To find the effect of task weight on energy consumption, a set of experiments was conducted by applying thin and fat tasks, where a thin task has far fewer instructions than a fat task. We observed through the simulation that the difference in power reduction between the two task types when using DVFS is less than 1%.
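The load-dependent savings reported above follow from how DVFS works: dynamic CPU power grows roughly with f * V^2, and since voltage scales with frequency, power is about cubic in the frequency scaling factor. The toy model below reproduces the qualitative trend (large savings at light load, near zero at full load); all power figures are illustrative assumptions, not measurements from the paper's 3-tier setup.

```python
# Hedged sketch of a DVFS power model. Without DVFS, a busy server draws
# p_max and an idle one p_idle. With DVFS, frequency tracks load and power
# follows ~f^3 above a small static floor. Numbers are illustrative.

def power_watts(load, p_max=200.0, p_idle=100.0, dvfs=False):
    """Average power at a utilization level in [0, 1]."""
    if not dvfs:
        # Fixed frequency: full power while busy, idle power otherwise.
        return load * p_max + (1 - load) * p_idle
    # DVFS: scale frequency down to the load, with a 20% minimum frequency
    # and a 20 W static (leakage + platform) floor.
    f = max(load, 0.2)
    return 20.0 + (p_max - 20.0) * f ** 3

for load in (0.1, 1.0):
    base, scaled = power_watts(load), power_watts(load, dvfs=True)
    print(f"load={load:.0%}: DVFS saves {1 - scaled / base:.0%}")
```

At 10% load the cubic scaling yields savings on the order of 80%, while at 100% load there is no slack to exploit and the savings collapse to zero, mirroring the 83%-to-under-10% range observed with the Green scheduler.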
Energy generation and consumption are central aspects of social life, because modern people's need for energy is a crucial ingredient of existence. Energy efficiency is therefore regarded as the most economical approach to providing safer and more affordable energy for both utilities and consumers, through the enhancement of energy security and the reduction of energy emissions. One of the problems facing cloud computing service providers is the sharp rise in the cost of energy, along with efficiency and carbon emission concerns, in running their internet data centres (IDCs). To mitigate these issues, smart micro-grids have been found suitable for increasing the energy efficiency, sustainability and reliability of electrical services for IDCs. This paper therefore presents ideas on how smart micro-grids can bring down the troubling energy costs and carbon emissions of IDCs while improving energy efficiency, in an effort to attain green cloud computing services from the service providers. In specific terms, we aim at achieving green information and communication technology (ICT) in the field of cloud computing with respect to energy efficiency, cost-effectiveness and carbon emission reduction from the cloud data center's perspective.
As technology improves, several modernization efforts are undertaken in the process of teaching and learning. An effective education system should maintain global connectivity, federate security and deliver self-access to its services. Cloud computing services transform the current education system into an advanced one. Several tools and services exist to make teaching and learning more interesting. In the higher education system, the data flow and basic operations are almost the same across institutions. These systems need to access cloud-based applications and services for their operational advancement and flexibility. Architecting a suitable cloud-based education system will leverage all the benefits of the cloud for its stakeholders. At the same time, educational institutions want to keep their sensitive information more secure. For that, they need to maintain their on-premises data center along with the cloud infrastructure. This paper proposes an advanced, flexible and secure hybrid cloud architecture to satisfy the growing demands of an education system. By sharing the proposed cloud infrastructure among several higher educational institutions, it is possible to implement a common education system across organizations. Moreover, this research demonstrates how a cloud-based education architecture can utilize the advantages of cloud resources offered by several providers in a hybrid cloud environment. In addition, a reference architecture using Amazon Web Services (AWS) is proposed to implement a common university education system.
Funding: supported by the National Postdoctoral Science Foundation of China (2014M550068).
Funding: supported by the National Natural Science Foundation of China (61202004, 61272084); the National Key Basic Research Program of China (973 Program) (2011CB302903); the Specialized Research Fund for the Doctoral Program of Higher Education (20093223120001, 20113223110003); the China Postdoctoral Science Foundation Funded Project (2011M500095, 2012T50514); the Natural Science Foundation of Jiangsu Province (BK2011754, BK2009426); the Jiangsu Postdoctoral Science Foundation Funded Project (1102103C); the Natural Science Fund of Higher Education of Jiangsu Province (12KJB520007); and the Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (yx002001).
Funding: supported in part by the National Key Basic Research and Development (973) Program of China (No. 2011CB302600); the National Natural Science Foundation of China (No. 61222205); the Program for New Century Excellent Talents in University; and the Fok Ying-Tong Education Foundation (No. 141066).
Funding: the State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources under Grant LAPS21002, and the State Key Laboratory of Disaster Prevention and Reduction for Power Grid Transmission and Distribution Equipment under Grant SGHNFZ00FBYJJS2100047.
Funding: supported by the National Key R&D Program of China (No. 2017YFB1003000); the National Natural Science Foundation of China (Nos. 61572129, 61602112, 61502097, 61702096, 61320106007, and 61632008); the International S&T Cooperation Program of China (No. 2015DFA10490); the National Science Foundation of Jiangsu Province (Nos. BK20160695 and BK20170689); the Jiangsu Provincial Key Laboratory of Network and Information Security (No. BM2003201); the Key Laboratory of Computer Network and Information Integration of Ministry of Education of China (No. 93K-9); also supported by the Collaborative Innovation Center of Novel Software Technology and Industrialization and the Collaborative Innovation Center of Wireless Communications Technology.
Funding: supported by the National Natural Science Foundation of China (61002011); the National High Technology Research and Development Program of China (863 Program) (2013AA013303); the Fundamental Research Funds for the Central Universities (2013RC1104); the Natural Science Foundation of Gansu Province, China (1308RJZA306); and the Open Fund of the State Key Laboratory of Software Development Environment (SKLSDE-2009KF-2-08).
Funding: supported by the Deanship of Scientific Research, Prince Sattam Bin Abdulaziz University, KSA, Project Grant No. 2019/02/10478, Almotiry O.N and Sha M, www.psau.edu.sa.