Funding: supported by the National Natural Science Foundation of China (61202004, 61272084), the National Key Basic Research Program of China (973 Program) (2011CB302903), the Specialized Research Fund for the Doctoral Program of Higher Education (20093223120001, 20113223110003), the China Postdoctoral Science Foundation Funded Project (2011M500095, 2012T50514), the Natural Science Foundation of Jiangsu Province (BK2011754, BK2009426), the Jiangsu Postdoctoral Science Foundation Funded Project (1102103C), the Natural Science Fund of Higher Education of Jiangsu Province (12KJB520007), and the Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (yx002001).
Abstract: How to effectively reduce the energy consumption of large-scale data centers is a key issue in cloud computing. This paper presents a novel low-power task scheduling algorithm (L3SA) for large-scale cloud data centers. A winner tree is introduced in which the data nodes form the leaf nodes of the tree, and the final winner is selected with the goal of reducing energy consumption. The complexity of large-scale cloud data centers is fully considered, and a task comparison coefficient is defined to make the task scheduling strategy more reasonable. Experiments and performance analysis show that the proposed algorithm can effectively improve node utilization and reduce the overall power consumption of the cloud data center.
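The abstract does not give L3SA in code form, but the core data structure it names is a standard tournament ("winner") tree. The sketch below is a minimal Python illustration under assumed details: the node fields, the energy-oriented key function, and the replay step are placeholders, not the paper's actual definitions.

```python
# A minimal winner-tree sketch; the node fields and energy_key are assumptions.
import math

class WinnerTree:
    """Winner tree whose leaves are data-node indices; each internal slot keeps
    the index of the subtree's winner under `key` (lower key value wins)."""

    def __init__(self, nodes, key):
        self.nodes, self.key = nodes, key
        self.size = 1 << math.ceil(math.log2(max(len(nodes), 1)))
        self.tree = [None] * (2 * self.size)
        for i in range(len(nodes)):                      # place leaves
            self.tree[self.size + i] = i
        for i in range(self.size - 1, 0, -1):            # play matches bottom-up
            self.tree[i] = self._better(self.tree[2 * i], self.tree[2 * i + 1])

    def _better(self, a, b):
        if a is None:
            return b
        if b is None:
            return a
        return a if self.key(self.nodes[a]) <= self.key(self.nodes[b]) else b

    def winner(self):
        return self.tree[1]                              # root holds the final winner

    def replay(self, leaf):
        """Re-run matches from a changed leaf up to the root."""
        i = (self.size + leaf) // 2
        while i >= 1:
            self.tree[i] = self._better(self.tree[2 * i], self.tree[2 * i + 1])
            i //= 2

# Hypothetical key: prefer nodes that are already active and lightly loaded,
# so idle nodes can stay powered down (one plausible energy-saving criterion).
def energy_key(node):
    return node["idle_power"] * (node["util"] == 0.0) + node["util"]

nodes = [{"util": 0.3, "idle_power": 60.0},
         {"util": 0.0, "idle_power": 60.0},
         {"util": 0.7, "idle_power": 60.0}]
tree = WinnerTree(nodes, energy_key)
target = tree.winner()          # schedule the next task on this node
nodes[target]["util"] += 0.1    # account for the new task...
tree.replay(target)             # ...and restore the winner property
```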
Funding: supported by the National Postdoctoral Science Foundation of China (2014M550068).
Abstract: High application latency brings revenue loss to cloud infrastructure providers in the cloud data center. The existing controllers in the software-defined networking architecture can fetch and process traffic information in the network, so they can optimize only the network latency of applications. However, the serving latency of applications is also an important factor in the user experience delivered for arriving requests. Unintelligent request routing causes large serving latency when arriving requests are allocated to overloaded virtual machines. To deal with the request routing problem, this paper proposes a workload-aware software-defined networking controller architecture. Request routing algorithms are then proposed to minimize the total round-trip time for every type of request by considering both the congestion in the network and the workload on virtual machines (VMs). The paper finally evaluates the proposed algorithms in a simulated prototype. The simulation results show that the proposed methodology is efficient compared with existing approaches.
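As a rough illustration of the routing idea, the Python sketch below picks, for each arriving request, the VM that minimizes an estimated round-trip time combining the controller's network-latency view with a simple M/M/1-style serving-delay estimate. The function names, the load model, and the example numbers are assumptions, not the paper's algorithm.

```python
# Illustrative workload-aware routing rule (assumed model, not the paper's algorithm).

def estimated_rtt(path_latency_ms, service_rate, current_load):
    """Network RTT plus an M/M/1-style expected serving delay, in milliseconds."""
    if current_load >= service_rate:               # VM saturated: avoid it
        return float("inf")
    serving_ms = 1000.0 / (service_rate - current_load)
    return 2.0 * path_latency_ms + serving_ms

def route_request(vms, latency_to):
    """Pick the VM with the smallest estimated RTT and account for the new request."""
    best = min(vms, key=lambda v: estimated_rtt(latency_to[v["id"]],
                                                v["service_rate"], v["load"]))
    best["load"] += 1                              # controller updates its workload view
    return best["id"]

vms = [{"id": "vm-a", "service_rate": 50, "load": 45},
       {"id": "vm-b", "service_rate": 30, "load": 5}]
latency_to = {"vm-a": 2.0, "vm-b": 6.0}            # ms, from the controller's topology view
print(route_request(vms, latency_to))              # -> "vm-b": lower serving delay wins
```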
Abstract: The increase in computing capacity has caused a rapid and sudden increase in the Operational Expenses (OPEX) of data centers. OPEX reduction is a major concern and a key target in modern data centers. In this study, the scalability of the Dynamic Voltage and Frequency Scaling (DVFS) power management technique is studied under multiple workloads. The environment of this study is a 3-tier data center. We conducted multiple experiments to find the impact of using DVFS on energy reduction under two scheduling techniques, namely Round Robin and Green. We observed that the amount of energy reduction varies with the data center load: as the load increases, the energy reduction decreases. Experiments using the Green scheduler showed around an 83% decrease in power consumption when DVFS is enabled and the DC is lightly loaded. When the DC is fully loaded, so that the servers' CPUs are constantly busy with no idle time, the effect of DVFS decreases and stabilizes at less than 10%. Experiments using the Round Robin scheduler showed less energy saving from DVFS, specifically around 25% under light DC load and less than 5% under heavy DC load. To find the effect of task weight on energy consumption, a set of experiments was conducted by applying thin and fat tasks, where a thin task contains far fewer instructions than a fat task. We observed through the simulation that the difference in power reduction between the two types of tasks when using DVFS is less than 1%.
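A back-of-the-envelope model helps explain why the savings shrink at high load: when DVFS scales frequency down toward the current utilisation, the roughly cubic dynamic-power term collapses, but at full load there is no headroom left to scale. The f³ model and the constants below are assumptions in the spirit of 3-tier data-center simulators, not the exact model used in this study.

```python
# Sketch of a frequency-cubed server power model (assumed constants).

def server_power(load, dvfs=False, p_fixed=130.0, p_dyn=170.0, f_min=0.2):
    """Power (W) of one server at CPU utilisation `load` in [0, 1].

    Without DVFS the CPU stays at full frequency; with DVFS the frequency is
    scaled down toward the current load, and dynamic power falls roughly as f^3.
    """
    f = max(load, f_min) if dvfs else 1.0
    return p_fixed + p_dyn * f ** 3

for load in (0.1, 0.5, 1.0):
    base, scaled = server_power(load), server_power(load, dvfs=True)
    print(f"load={load:.1f}  DVFS saving = {1 - scaled / base:.0%}")
# Savings shrink as the load approaches 100%, mirroring the trend reported above;
# the absolute percentages depend on the scheduler and on the power constants.
```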
Abstract: Energy generation and consumption are central aspects of social life, because modern society's need for energy is a crucial ingredient of existence. Energy efficiency is therefore regarded as the most economical approach to providing safer and more affordable energy for both utilities and consumers, through the enhancement of energy security and the reduction of energy emissions. One of the problems facing cloud computing service providers is the steep rise in energy costs, together with efficiency and carbon-emission concerns, in running their Internet data centres (IDCs). To mitigate these issues, the smart micro-grid has been found suitable for increasing the energy efficiency, sustainability, and reliability of electrical services for IDCs. This paper therefore presents ideas on how smart micro-grids can bring down the troubling energy costs and carbon emissions of IDCs while improving energy efficiency, in an effort to attain green cloud computing services from the service providers. In specific terms, we aim at achieving green information and communication technology (ICT) in the field of cloud computing in relation to energy efficiency, cost-effectiveness, and carbon-emission reduction from the cloud data centre's perspective.
Abstract: This paper investigates autonomic cloud data center networks as a solution to the management and cost issues of an increasingly complex computing environment, in order to meet users' growing demand. Virtualized cloud networking is intended to provide a plethora of rich online applications with self-configuration, self-healing, self-optimization, and self-protection. In addition, we draw on intelligent agents and multi-agent systems, concerning the system model, strategy, and autonomic cloud computing, including the development and implementation of independent computing systems. Then, combining this architecture with the autonomous unit, we propose MCDN (Model of Autonomic Cloud Data Center Networks). The model can define intelligent states, elaborate the composition structure, and describe the complete life cycle. Finally, the proposed public infrastructure can be provided with the autonomous unit in the supported interaction model.
Abstract: With the rapid development of cloud computing technology, the power supply systems of traditional telecommunications Internet Data Center (IDC) equipment rooms can no longer meet the requirements for high reliability, high efficiency, and flexibility. Targeting the characteristics of cloud data centers, this paper proposes an innovative power supply system design for telecommunications IDC equipment rooms. The scheme comprises three modules, namely main power supply, backup power supply, and Uninterruptible Power Supply (UPS), and integrates an intelligent power distribution management system, a cloud-platform energy management system, and an intelligent UPS energy-efficiency management system into the corresponding modules. This work provides strong support for the stable operation and sustainable development of cloud data centers.
Funding: supported by the Deanship of Scientific Research, Prince Sattam Bin Abdulaziz University, KSA, Project Grant No. 2019/02/10478, Almotiry O.N and Sha M, www.psau.edu.sa.
Abstract: As technology improves, several modernization efforts are taken in the process of teaching and learning. An effective education system should maintain global connectivity, federate security, and deliver self-access to its services. Cloud computing services transform the current education system into an advanced one. There exist several tools and services to make teaching and learning more interesting. In the higher education system, the data flow and basic operations are almost the same. These systems need to access cloud-based applications and services for their operational advancement and flexibility. Architecting a suitable cloud-based education system will bring all the benefits of the cloud to its stakeholders. At the same time, educational institutions want to keep their sensitive information more secure. For that, they need to maintain their on-premises data center along with the cloud infrastructure. This paper proposes an advanced, flexible, and secure hybrid cloud architecture to satisfy the growing demands of an education system. By sharing the proposed cloud infrastructure among several higher educational institutions, it becomes possible to implement a common education system across organizations. Moreover, this research demonstrates how a cloud-based education architecture can utilize the advantages of the cloud resources offered by several providers in a hybrid cloud environment. In addition, a reference architecture using Amazon Web Services (AWS) is proposed to implement a common university education system.
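As a hedged sketch of one building block of such a hybrid setup, the boto3 snippet below creates a VPC for the shared education services and a site-to-site VPN back to an institution's on-premises data center. The region, CIDR, public IP, and BGP ASN are placeholder values, and the paper's reference architecture covers far more than this single link (subnets, identity, managed services, and so on).

```python
# Minimal hybrid-connectivity sketch with boto3; all identifiers are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# VPC that would host the shared university education services in the cloud.
vpc_id = ec2.create_vpc(CidrBlock="10.10.0.0/16")["Vpc"]["VpcId"]

# AWS side of the site-to-site VPN.
vgw_id = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw_id, VpcId=vpc_id)

# Customer gateway represents the institution's on-premises VPN endpoint.
cgw_id = ec2.create_customer_gateway(
    BgpAsn=65000, PublicIp="203.0.113.10", Type="ipsec.1"
)["CustomerGateway"]["CustomerGatewayId"]

# The VPN connection keeps sensitive on-premises records reachable from the cloud tier.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw_id, Type="ipsec.1", VpnGatewayId=vgw_id,
    Options={"StaticRoutesOnly": True},
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```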
Abstract: With the continuous development of information technology in China, customers' requirements for information and data keep rising, such as more diverse data, faster data delivery, and stronger inherent data processing capability, which means the network must be controlled flexibly. The emergence of Software Defined Network (SDN) technology effectively meets this need: it not only enables flexible and automatic resource configuration but also satisfies the application requirements of data center networks. This paper therefore studies the construction of data center infrastructure networks based on SDN technology. It first analyzes the advantages of SDN for building data centers, and then discusses and analyzes in depth the application of SDN and the construction of the infrastructure network from the perspectives of its basic architecture and abstract services, so as to provide a reference for relevant departments.
Funding: the State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources under Grant LAPS21002, and the State Key Laboratory of Disaster Prevention and Reduction for Power Grid Transmission and Distribution Equipment under Grant SGHNFZ00FBYJJS2100047.
Abstract: To enhance the resilience of power systems with offshore wind farms (OWFs), a proactive scheduling scheme is proposed to unlock the flexibility of cloud data centers (CDCs) in responding to the uncertain spatial and temporal impacts induced by hurricanes. Total life simulation (TLS) is adopted to project the local weather conditions at transmission lines and OWFs before, during, and after the hurricane. The static power curve of wind turbines (WTs) is used to capture the output of OWFs, and fragility analysis of transmission-line components is used to formulate the time-varying failure rates of transmission lines. A novel distributionally robust ambiguity set is constructed with a discrete support set, where the impacts of hurricanes are depicted by these supports. To minimize load shedding and dropped workloads, the spatial and temporal demand response capabilities of CDCs, based on task migration and delay tolerance, are incorporated into resilient management. The flexibility of the CDCs' power consumption is integrated into a two-stage distributionally robust optimization problem with conditional value at risk (CVaR). Based on Lagrange duality, this problem is reformulated into its deterministic counterpart and solved by a novel decomposition method with hybrid cuts, which admits fewer iterations and a faster convergence rate. The effectiveness of the proposed resilient management strategy is verified through case studies on the modified IEEE RTS-24 system, which includes 4 data centers and 5 offshore wind farms.
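For readers unfamiliar with the risk-averse formulation, a generic two-stage mean-CVaR distributionally robust problem over a discrete support set takes the following form; the symbols are illustrative and not necessarily the paper's exact notation.

$$\min_{\boldsymbol{x}\in\mathcal{X}}\;\boldsymbol{c}^{\top}\boldsymbol{x}\;+\;\sup_{\boldsymbol{p}\in\mathcal{P}}\Big[(1-\rho)\,\mathbb{E}_{\boldsymbol{p}}\big[Q(\boldsymbol{x},\boldsymbol{\xi})\big]\;+\;\rho\,\mathrm{CVaR}_{\alpha}^{\boldsymbol{p}}\big[Q(\boldsymbol{x},\boldsymbol{\xi})\big]\Big]$$

$$\mathrm{CVaR}_{\alpha}^{\boldsymbol{p}}\big[Q\big]=\min_{\eta\in\mathbb{R}}\Big\{\eta+\tfrac{1}{1-\alpha}\sum_{k=1}^{K}p_{k}\big[Q(\boldsymbol{x},\boldsymbol{\xi}_{k})-\eta\big]^{+}\Big\},\qquad \mathcal{P}=\Big\{\boldsymbol{p}\ge 0:\ \sum_{k=1}^{K}p_{k}=1,\ \|\boldsymbol{p}-\hat{\boldsymbol{p}}\|_{1}\le\theta\Big\}$$

Here x collects the first-stage scheduling decisions, the ξ_k are the discrete hurricane supports, Q(x, ξ_k) is the second-stage cost (for example, penalties on load shedding and dropped CDC workloads), and θ controls the size of the ambiguity set around the nominal distribution p̂. It is the inner supremum over p that Lagrange duality turns into the deterministic counterpart mentioned above.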
Funding: supported in part by the National Key Basic Research and Development (973) Program of China (No. 2011CB302600), the National Natural Science Foundation of China (No. 61222205), the Program for New Century Excellent Talents in University, and the Fok Ying-Tong Education Foundation (No. 141066).
Abstract: Virtual Machine (VM) allocation for multiple tenants is an important and challenging problem in providing efficient infrastructure services in cloud data centers. Tenants run applications on their allocated VMs, and the network distance between a tenant's VMs may considerably impact the tenant's Quality of Service (QoS). In this study, we define and formulate the multi-tenant VM allocation problem in cloud data centers, considering the VM requirements of different tenants and introducing the allocation goal of minimizing the sum of the VMs' network diameters of all tenants. Then, we propose a Layered Progressive resource allocation algorithm for multi-tenant cloud data centers based on the Multiple Knapsack Problem (LP-MKP). The LP-MKP algorithm uses a multi-stage layered progressive method for multi-tenant VM allocation and efficiently handles unprocessed tenants at each stage. This reduces resource fragmentation in cloud data centers, decreases the differences in QoS among tenants, and improves tenants' overall QoS in cloud data centers. We perform experiments to evaluate the LP-MKP algorithm and demonstrate that it can provide significant gains over other allocation algorithms.
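The LP-MKP algorithm itself is multi-stage and layered; the short Python sketch below only illustrates the flavour of knapsack-style packing for multi-tenant VM placement (largest tenants first, whole-tenant fits preferred, leftovers deferred). The host capacities, tenant demands, and tie-breaking rule are assumptions for illustration, not the paper's method.

```python
# Greedy knapsack-style placement sketch (illustrative only, not LP-MKP).

def allocate(tenants, hosts):
    """tenants: {name: vm_count}; hosts: {name: free_slots}. Returns placements."""
    placement = {t: [] for t in tenants}
    # Largest requests first, so big tenants get a chance to fit on one host.
    for tenant, demand in sorted(tenants.items(), key=lambda kv: -kv[1]):
        remaining = demand
        # Prefer hosts that can take the whole remaining demand (smaller spread).
        for host in sorted(hosts, key=lambda h: (hosts[h] < remaining, -hosts[h])):
            if remaining == 0:
                break
            take = min(hosts[host], remaining)
            if take > 0:
                placement[tenant].append((host, take))
                hosts[host] -= take
                remaining -= take
        if remaining:   # unprocessed tenant, deferred (LP-MKP handles this per stage)
            placement[tenant].append(("unallocated", remaining))
    return placement

hosts = {"rack1-h1": 8, "rack1-h2": 4, "rack2-h1": 6}
tenants = {"A": 6, "B": 7, "C": 3}
print(allocate(tenants, hosts))
```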