Abstract: The rapid growth of service-oriented and cloud computing has created large-scale data centres worldwide. Modern data centres' operating costs mostly come from back-end cloud infrastructure and energy consumption. Cloud computing requires extensive communication resources, and cloud applications need ever more bandwidth to transfer large amounts of data and satisfy end-user requirements. It is also essential that no communication source causes congestion or packet loss owing to unnecessary switching buffers. This paper proposes a novel Energy and Communication (EC) aware scheduling (EC-scheduler) algorithm for green cloud computing, which optimizes data centre energy consumption and traffic load. The primary goal of the proposed EC-scheduler is to assign user applications to cloud data centre resources with minimal utilization of data centres. We first introduce a Multi-Objective Leader Salp Swarm (MLSS) algorithm for task sorting, which ensures traffic load balancing, and then an Emotional Artificial Neural Network (EANN) for efficient resource allocation. The EC-scheduler schedules cloud user requirements onto cloud servers by optimizing both energy and communication delay, which lowers the carbon dioxide emissions of the cloud server system and supports a genuinely green environment. We evaluated the proposed scheme and existing cloud scheduling methods using the GreenCloud simulator to analyse how well it optimizes data centre energy and other scheduler metrics. In terms of Power Usage Effectiveness (PUE), Data Centre Energy Productivity (DCEP), throughput, Average Execution Time (AET), energy consumption, and makespan, the EC-scheduler achieved up to 26.738%, 37.59%, 50%, 4.34%, 34.2%, and 33.54% higher efficiency, respectively, than existing state-of-the-art schedulers with respect to the number of user applications and the number of user requests.
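For readers unfamiliar with the two headline metrics, the standard Green Grid definitions are given below; the abstract does not state the exact formulas used, so these are assumed background rather than quotations from the paper.

```latex
% Power Usage Effectiveness: total facility energy per unit of IT-equipment energy (ideal value 1)
\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}} \;\ge\; 1
% Data Centre Energy Productivity: useful work delivered per unit of total energy consumed
\mathrm{DCEP} = \frac{W_{\text{useful}}}{E_{\text{total facility}}}
```

A greener scheduler therefore pushes PUE toward 1 and raises DCEP.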
Funding: Supported in part by the National Natural Science Foundation of China (61802015, 61703011), the Major Science and Technology Program for Water Pollution Control and Treatment of China (2018ZX07111005), the National Defense Pre-Research Foundation of China (41401020401, 41401050102), and the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah (D-422-135-1441).
Abstract: An increasing number of enterprises have adopted cloud computing to manage their important business applications in distributed green cloud (DGC) systems for low response time and high cost-effectiveness in recent years. Task scheduling and resource allocation in DGCs have gained attention in both academia and industry because DGCs are costly to manage owing to high energy consumption. Many factors in DGCs, e.g., power grid prices and the amount of available green energy, exhibit strong spatial variations. The dramatic increase in arriving tasks makes it a big challenge to minimize the energy cost of a DGC provider in a market where the above factors all vary spatially. This work adopts a G/G/1 queuing system to analyze the performance of servers in DGCs. Based on it, a single-objective constrained optimization problem is formulated and solved by a proposed simulated-annealing-based bees algorithm (SBA). SBA minimizes the energy cost of a DGC provider by optimally allocating tasks of heterogeneous applications among multiple DGCs, and by specifying the running speed of each server and the number of powered-on servers in each GC, while strictly meeting the response time limits of the tasks of all applications. Experimental results based on realistic data prove that SBA achieves lower energy cost than several benchmark scheduling methods.
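For context, a common first-order tool for analysing a G/G/1 server is Kingman's approximation for the mean queueing delay; the paper's exact model is not reproduced here, so this is background rather than the authors' formulation.

```latex
E[W_q] \;\approx\; \frac{\rho}{1-\rho}\cdot\frac{c_a^{2}+c_s^{2}}{2}\cdot E[S],
\qquad \rho = \lambda\, E[S] < 1
```

Here \(\lambda\) is the task arrival rate, \(E[S]\) the mean service time (which shrinks as the chosen server speed grows), and \(c_a, c_s\) the coefficients of variation of inter-arrival and service times; mean response time is \(E[W_q]+E[S]\). Raising server speed or powering on more servers lowers \(\rho\) and thus the response time, at the cost of extra energy, which is exactly the trade-off an optimizer such as SBA must search over.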
Abstract: Energy generation and consumption are central aspects of social life, since modern people's need for energy is a crucial ingredient of existence. Energy efficiency is therefore regarded as the most economical approach to providing safer, more affordable energy for both utilities and consumers, through enhanced energy security and reduced emissions. One of the problems facing cloud computing service providers is the steep rise in energy costs and carbon emissions, together with poor energy efficiency, in running their Internet data centres (IDCs). To mitigate these issues, smart micro-grids have been found suitable for increasing the energy efficiency, sustainability, and reliability of electrical services for IDCs. This paper therefore presents ideas on how smart micro-grids can bring down the troubling energy costs and carbon emissions of IDCs while improving energy efficiency, in an effort to attain green cloud computing services from service providers. Specifically, we aim at achieving green information and communication technology (ICT) in the field of cloud computing in terms of energy efficiency, cost-effectiveness, and carbon emission reduction from the cloud data centre's perspective.
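The carbon argument can be made concrete with the usual emission-accounting identity (generic accounting, with no figures from the paper): operational emissions scale with the emission factor of each energy source, so shifting IDC load onto a micro-grid's renewable generation cuts emissions even when total consumption stays the same.

```latex
\mathrm{CO_2} \;=\; E_{\text{grid}}\cdot EF_{\text{grid}} \;+\; E_{\text{renewable}}\cdot EF_{\text{renewable}},
\qquad EF_{\text{renewable}} \ll EF_{\text{grid}}
```

where \(E\) denotes energy drawn from each source (kWh) and \(EF\) its emission factor (kg CO2 per kWh).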
Funding: Supported by the National Natural Science Foundation of China (61472192, 61202004), the Special Fund for Fast Sharing of Science Paper in Net Era by CSTD (2013116), and the Natural Science Fund of Higher Education of Jiangsu Province (14KJB520014).
Abstract: In order to lower the power consumption and improve resource utilization in current cloud computing systems, this paper proposes two resource pre-allocation algorithms based on a "shut down the redundant, turn on the demanded" strategy. First, a green cloud computing model is presented that, using virtualization technology, abstracts the task-scheduling problem into a virtual machine deployment problem. Second, future system workloads must be predicted: a cubic exponential smoothing algorithm based on a conservative control (CESCC) strategy is proposed, which is combined with the current system state and resource distribution to calculate the resource demand of the next period of task requests. Then, a multi-objective constrained optimization model of power consumption and a low-energy resource allocation algorithm based on probabilistic matching (RA-PM) are proposed. To reduce power consumption further, a resource allocation algorithm based on improved simulated annealing (RA-ISA) is designed. Experimental results show that the prediction and conservative control strategy allow resource pre-allocation to keep up with demand and improve real-time responsiveness and system stability. Both RA-PM and RA-ISA activate fewer hosts, achieve better load balance among the set of highly applicable hosts, maximize resource utilization, and greatly reduce the power consumption of cloud computing systems.
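As a minimal sketch of the prediction step, the snippet below assumes CESCC builds on Brown's cubic (triple) exponential smoothing and adds a conservative over-provisioning margin; the function name, the margin parameter, and the sample data are illustrative, not taken from the paper.

```python
def cubic_exp_smoothing_forecast(history, alpha=0.3, horizon=1, margin=0.1):
    """Forecast the next-period workload with Brown's cubic (triple)
    exponential smoothing, then inflate it by a conservative margin
    so that pre-allocated resources are unlikely to fall short.

    history : observed workloads per period (e.g., requested VMs)
    alpha   : smoothing factor in (0, 1)
    horizon : number of periods ahead to forecast
    margin  : conservative over-provisioning ratio (illustrative)
    """
    s1 = s2 = s3 = history[0]               # initialise the three smoothed series
    for x in history:
        s1 = alpha * x + (1 - alpha) * s1   # first-order smoothing
        s2 = alpha * s1 + (1 - alpha) * s2  # second-order smoothing
        s3 = alpha * s2 + (1 - alpha) * s3  # third-order smoothing

    # Brown's coefficients for the quadratic trend model
    a = 3 * s1 - 3 * s2 + s3
    b = (alpha / (2 * (1 - alpha) ** 2)) * (
        (6 - 5 * alpha) * s1 - 2 * (5 - 4 * alpha) * s2 + (4 - 3 * alpha) * s3
    )
    c = (alpha ** 2 / (1 - alpha) ** 2) * (s1 - 2 * s2 + s3)

    forecast = a + b * horizon + 0.5 * c * horizon ** 2
    return forecast * (1 + margin)          # conservative control: over-provision slightly


if __name__ == "__main__":
    load = [120, 130, 128, 140, 155, 150, 165, 172]  # synthetic per-period workload
    print(round(cubic_exp_smoothing_forecast(load), 1))
```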
Abstract: The increase in computing capacity has caused a rapid and sudden increase in the Operational Expenses (OPEX) of data centers. OPEX reduction is a big concern and a key target in modern data centers. In this study, the scalability of the Dynamic Voltage and Frequency Scaling (DVFS) power management technique is studied under several different workloads. The environment of this study is a 3-Tier data center. We conducted multiple experiments to find the impact of using DVFS on energy reduction under two scheduling techniques, namely Round Robin and Green. We observed that the amount of energy reduction varies with data center load: as the data center load increases, the energy reduction decreases. Experiments using the Green scheduler showed around an 83% decrease in power consumption when DVFS is enabled and the DC is lightly loaded. When the DC is fully loaded, so that the servers' CPUs are constantly busy with no idle time, the effect of DVFS decreases and stabilizes at less than 10%. Experiments using the Round Robin scheduler showed less energy saving from DVFS, specifically around 25% under light DC load and less than 5% under heavy DC load. To find the effect of task weight on energy consumption, a set of experiments was conducted with thin and fat tasks; a thin task has far fewer instructions than a fat task. We observed, through the simulation, that the difference in power reduction between the two task types when using DVFS is less than 1%.
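The load-dependence reported above matches the usual first-order DVFS power model, given here as background rather than as the paper's model: dynamic CMOS power falls roughly cubically as frequency and voltage are scaled down together, so a lightly loaded server can be slowed dramatically, while a fully loaded one cannot be slowed without missing deadlines.

```latex
P_{\text{dyn}} \;\approx\; \alpha\, C_{\text{eff}}\, V^{2} f,
\qquad V \propto f \;\Rightarrow\; P_{\text{dyn}} \propto f^{3}
```

where \(\alpha\) is the switching activity factor, \(C_{\text{eff}}\) the effective switched capacitance, \(V\) the supply voltage, and \(f\) the clock frequency; for a fixed workload the run time grows as \(1/f\), so dynamic energy still drops roughly as \(f^{2}\).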