Funding: The authors extend their appreciation to Prince Sattam bin Abdulaziz University for funding this research work through Project Number (PSAU/2023/01/27268).
Abstract: Cloud computing has become increasingly popular due to its capacity to perform computations without relying on on-premises physical infrastructure, thereby revolutionizing computing processes. However, the rising energy consumption of cloud data centers poses a significant challenge, especially with escalating energy costs. This paper tackles the issue by introducing efficient solutions for data placement and node management, with a clear emphasis on the role of the Internet of Things (IoT) throughout the research process. The IoT plays a pivotal role in this study by collecting real-time data from sensors strategically positioned in and around data centers. These sensors continuously monitor vital parameters such as energy usage and temperature, providing a comprehensive dataset for analysis. The data generated by the IoT is integrated into the hybrid TCN-GRU-NBeat (NGT) model, enabling a dynamic and accurate representation of the current state of the data center environment. Through the incorporation of the Seagull Optimization Algorithm (SOA), the NGT model optimizes storage migration strategies based on the latest information provided by the IoT sensors. The model is trained on 80% of the available dataset and tested on the remaining 20%. The results demonstrate the effectiveness of the proposed approach: with a Mean Squared Error (MSE) of 5.33% and a Mean Absolute Error (MAE) of 2.83%, the model accurately estimates power prices and leads to an average reduction of 23.88% in power costs. Furthermore, the integration of IoT data significantly enhances the accuracy of the NGT model, which outperforms benchmark algorithms such as DenseNet, Support Vector Machine (SVM), Decision Trees, and AlexNet: the NGT model achieves an accuracy of 97.9%, surpassing their respective rates of 87%, 83%, 80%, and 79%. These findings underscore the effectiveness of the proposed method in optimizing energy efficiency and enhancing the predictive capabilities of cloud computing systems, with the IoT providing the real-time operational insight into data centers that drives these advancements.
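As a companion to the abstract above, the following is a minimal sketch of the evaluation protocol it describes: a chronological 80/20 train/test split scored with MSE and MAE. The synthetic power-price series and the persistence forecaster are stand-in assumptions; the actual NGT (TCN-GRU-NBeat) model, its SOA-tuned hyperparameters, and the paper's dataset are not reproduced here.

```python
# Minimal sketch of the 80/20 evaluation protocol described above.
# The persistence forecaster and the synthetic power-price series are
# illustrative assumptions; the paper's NGT (TCN-GRU-NBeat) model and
# SOA-tuned hyperparameters are not reproduced here.
import numpy as np

def evaluate_forecaster(series: np.ndarray, train_ratio: float = 0.8):
    """Chronologically split a series, forecast the test part, report MSE/MAE."""
    split = int(len(series) * train_ratio)
    train, test = series[:split], series[split:]

    # Placeholder predictor: persistence (last observed value carried forward).
    # In the paper, the trained NGT model's output would replace this.
    predictions = np.full_like(test, fill_value=train[-1])

    mse = np.mean((test - predictions) ** 2)
    mae = np.mean(np.abs(test - predictions))
    return mse, mae

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic hourly "power price" signal with daily seasonality plus noise.
    hours = np.arange(24 * 90)
    prices = 50 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)
    mse, mae = evaluate_forecaster(prices)
    print(f"MSE={mse:.2f}, MAE={mae:.2f}")
```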
Abstract: In the smart city paradigm, the deployment of Internet of Things (IoT) services and solutions requires extensive communication and computing resources to place and process IoT applications in real time, which consumes a lot of energy and increases operational costs. Usually, IoT applications are placed in the cloud to provide high-quality services and scalable resources. However, the existing cloud-based approach should consider the above constraints to efficiently place and process IoT applications. In this paper, an efficient optimization approach for placing IoT applications in a multi-layer fog-cloud environment is proposed using a Mixed-Integer Linear Programming (MILP) model. The approach takes into account IoT application requirements, available resource capacities, and the geographical locations of servers, which helps optimize placement decisions with respect to multiple objectives such as data transmission, power consumption, and cost. Simulation experiments were conducted with various IoT applications (e.g., augmented reality, infotainment, healthcare, and compute-intensive) to represent realistic scenarios. The results showed that the proposed approach outperformed the existing cloud-based approach, reducing data transmission by 64% and the associated processing and networking power consumption costs by up to 78%. Finally, a heuristic approach was developed to validate and imitate the presented model; it showed comparable outcomes, with the gap between the two reaching a maximum of 5.4% of the total power consumption.
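To make the placement formulation above concrete, here is a deliberately small, illustrative MILP written with PuLP: binary variables assign each IoT application to one fog or cloud node, subject to capacity constraints, while a weighted objective trades off processing power against data-transmission cost. The sets, coefficients, and weights are invented for demonstration and do not reproduce the paper's actual model or parameter values.

```python
# Illustrative toy MILP in the spirit of the placement approach described
# above, written with PuLP. All sets, coefficients, and weights are invented
# for demonstration; the paper's formulation is not reproduced here.
import pulp

apps = ["ar", "healthcare", "infotainment"]          # IoT applications
nodes = ["fog1", "fog2", "cloud"]                     # candidate hosts

cpu_demand = {"ar": 4, "healthcare": 2, "infotainment": 3}          # vCPUs
cpu_capacity = {"fog1": 6, "fog2": 6, "cloud": 100}
power_per_cpu = {"fog1": 8.0, "fog2": 8.0, "cloud": 5.0}            # W per vCPU
transmit_cost = {"fog1": 1.0, "fog2": 1.0, "cloud": 10.0}           # relative network cost

prob = pulp.LpProblem("iot_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("place", (apps, nodes), cat="Binary")

# Weighted objective: processing power plus data-transmission cost.
prob += pulp.lpSum(
    x[a][n] * (cpu_demand[a] * power_per_cpu[n] + transmit_cost[n])
    for a in apps for n in nodes
)

# Each application is placed exactly once.
for a in apps:
    prob += pulp.lpSum(x[a][n] for n in nodes) == 1

# Node capacity constraints.
for n in nodes:
    prob += pulp.lpSum(x[a][n] * cpu_demand[a] for a in apps) <= cpu_capacity[n]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for a in apps:
    chosen = next(n for n in nodes if pulp.value(x[a][n]) > 0.5)
    print(f"{a} -> {chosen}")
```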
Funding: The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia for funding this research work through project number (442/204).
Abstract: In this paper, the Internet of Medical Things (IoMT) is identified as a promising solution that integrates with the cloud computing environment to provide remote health monitoring and improve the quality of service (QoS) in the healthcare sector. However, problems with present architectural models, such as those related to energy consumption, service latency, execution cost, and resource usage, remain a major concern for adopting IoMT applications. To address these problems, this work presents a four-tier IoMT-edge-fog-cloud architecture along with an optimization model formulated using Mixed Integer Linear Programming (MILP), with the objective of efficiently processing and placing IoMT applications in the edge-fog-cloud computing environment while maintaining certain quality standards (e.g., energy consumption, service latency, network utilization). A modeling environment is used to assess and validate the proposed model under different traffic loads and processing requirements. In comparison with other existing models, the performance analysis of the proposed approach shows a maximum saving of 38% in energy consumption and a 73% reduction in service latency. The results also highlight that offloading an IoMT application to the edge and fog nodes rather than the cloud depends heavily on the trade-off between the network journey time saved and the extra power consumed by edge or fog resources.
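The trade-off mentioned at the end of the abstract above (network journey time saved versus extra power drawn by edge or fog hardware) can be illustrated with a small decision sketch. All timings, power figures, and the "joules per saved second" exchange rate are assumptions chosen for illustration, not values from the paper.

```python
# Minimal sketch of the latency-vs-power trade-off noted above: offloading
# to an edge/fog node is only worthwhile if the network journey time it saves
# outweighs, under a chosen weighting, the extra power its less efficient
# hardware draws. All numbers and the weighting factor are illustrative
# assumptions, not values from the paper.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    rtt_s: float            # network round-trip + transfer time (seconds)
    power_w: float          # average power drawn while processing (watts)
    proc_time_s: float      # processing time for the task (seconds)

def total_latency(t: Tier) -> float:
    return t.rtt_s + t.proc_time_s

def energy_j(t: Tier) -> float:
    return t.power_w * t.proc_time_s

def prefer_edge(edge: Tier, cloud: Tier, joules_per_saved_second: float) -> bool:
    """Offload to the edge tier if the energy penalty per second of latency
    saved stays below the given exchange rate (an assumed tuning knob)."""
    latency_saved = total_latency(cloud) - total_latency(edge)
    extra_energy = energy_j(edge) - energy_j(cloud)
    if latency_saved <= 0:
        return False
    return extra_energy <= joules_per_saved_second * latency_saved

if __name__ == "__main__":
    edge = Tier("edge", rtt_s=0.01, power_w=30.0, proc_time_s=0.50)
    cloud = Tier("cloud", rtt_s=0.25, power_w=25.0, proc_time_s=0.30)
    print("offload to edge:", prefer_edge(edge, cloud, joules_per_saved_second=200.0))
```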
Abstract: In a cloud environment, Virtual Machine (VM) consolidation and resource provisioning are used to address workload fluctuations. VM consolidation moves VMs from one host to another in order to reduce the number of active hosts and save power, whereas resource provisioning provides additional resource capacity to VMs as needed to meet Quality of Service (QoS) requirements. However, these techniques have a set of limitations, in terms of the additional costs related to migration and scaling time and of energy overhead, that need further consideration. Therefore, this paper presents a comprehensive literature review on dynamic resource management (i.e., VM consolidation and resource provisioning) in cloud computing environments, along with an overall discussion of closely related works. The outcomes of this research can be used to enhance the development of predictive resource management techniques that are aware of performance variation, energy consumption, and cost, so as to manage cloud resources efficiently.
Abstract: The Internet of Things (IoT) has recently become a popular technology that plays increasingly important roles in every aspect of our daily life. For collaboration between IoT devices and edge cloud servers, edge server nodes provide computation and storage capabilities to IoT devices through a task offloading process that accelerates tasks with large resource requests. However, the quantitative impact of different offloading architectures and policies on the performance of IoT applications remains far from clear, especially with a dynamic and unpredictable range of connected physical and virtual devices. To this end, this work models the performance impact of the latency exhibited within the edge cloud environment, and investigates and compares the effects of loosely-coupled (LC) and orchestrator-enabled (OE) architectures. The LC scheme handles task redistribution smoothly, with less time consumption, in offloading scenarios with small scale and small task requests. The OE scheme not only outperforms the LC scheme when large-scale task requests and offloading occur but also reduces the overall time by 28.19%. Finally, orchestration is important for achieving optimal offloading placement under different constraints.
Abstract: The Saudi Green Initiative aims to improve the Kingdom's environmental status and reduce carbon emissions by more than 278 million tons by 2030, with a promising plan to achieve net-zero carbon by 2060. Within this initiative, NEOM city has been proposed as the "Saudi hub" for green energy, since NEOM is estimated to generate up to 120 Gigawatts (GW) of renewable energy by 2030. Nevertheless, the Information and Communication Technology (ICT) sector is a key contributor to global energy consumption and carbon emissions, and data centers are estimated to consume about 13% of overall global electricity demand by 2030. Reducing the total carbon emissions of the ICT sector therefore plays a vital role in achieving the Saudi plan to minimize global carbon emissions. This paper proposes an eco-friendly approach using a Mixed-Integer Linear Programming (MILP) model to reduce the carbon emissions associated with ICT infrastructure in Saudi Arabia, taking the Saudi National Fiber Network (SNFN) as the backbone of the Saudi Internet infrastructure. First, we compare two scenarios for data center locations: the first considers traditional cloud data centers located in Jeddah and Riyadh, whereas the second considers NEOM as a potential new cloud data center location that takes advantage of its green energy infrastructure. Then, we calculate the energy consumption and carbon emissions of the cloud data centers and their associated energy costs. After that, we optimize the energy efficiency of the different cloud data center locations (in the SNFN) to reduce the associated carbon emissions and energy costs. Simulation results show that the proposed approach can save up to 94% of the carbon emissions and 62% of the energy cost compared to the current cloud physical topology. These savings are achieved by shifting cloud data centers from cities that rely on conventional energy sources to a city that is rich in renewable energy sources. Finally, we design a heuristic algorithm to verify the proposed approach; it gives results equivalent to those of the MILP model.
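A back-of-the-envelope sketch of the kind of comparison described above: the same data-center load served from a conventionally powered city versus a renewable-rich site. The grid emission factors, electricity prices, and annual load are illustrative assumptions, not figures from the paper or from Saudi grid data.

```python
# Back-of-the-envelope comparison of the same data-center IT load served
# from a conventionally powered city versus a renewable-rich site. Emission
# factors, electricity prices, and the annual load are illustrative
# assumptions, not figures from the paper or from Saudi grid data.
def site_footprint(annual_mwh: float, kgco2_per_mwh: float, usd_per_mwh: float):
    """Return (tonnes of CO2 per year, energy cost in USD per year)."""
    return annual_mwh * kgco2_per_mwh / 1000.0, annual_mwh * usd_per_mwh

if __name__ == "__main__":
    load_mwh = 50_000  # assumed annual data-center consumption

    conv_tco2, conv_cost = site_footprint(load_mwh, kgco2_per_mwh=600.0, usd_per_mwh=80.0)
    green_tco2, green_cost = site_footprint(load_mwh, kgco2_per_mwh=40.0, usd_per_mwh=35.0)

    print(f"conventional site: {conv_tco2:,.0f} tCO2, ${conv_cost:,.0f}")
    print(f"renewable-rich site: {green_tco2:,.0f} tCO2, ${green_cost:,.0f}")
    print(f"emission saving: {100 * (1 - green_tco2 / conv_tco2):.0f}%")
    print(f"cost saving: {100 * (1 - green_cost / conv_cost):.0f}%")
```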
Funding: In addition, the authors would like to thank the Deanship of Scientific Research, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia, for supporting this work.
Abstract: Recently, the number of Internet of Things (IoT) devices connected to the Internet has increased dramatically, as has the data produced by these devices. This requires offloading IoT tasks, releasing heavy computation and storage to resource-rich nodes such as edge computing and cloud computing. However, different service architectures and offloading strategies have different impacts on the service time performance of IoT applications. Therefore, this paper presents an Edge-Cloud system architecture that supports scheduling the offloaded tasks of IoT applications so as to minimize the enormous amount of data transmitted over the network. It also introduces offloading latency models to investigate the delay of different offloading scenarios/schemes and explores the effect of computational and communication demand on each one. A series of experiments conducted on EdgeCloudSim shows that different offloading decisions within the Edge-Cloud system can lead to different service times, depending on the computational resources and communication types involved. Finally, the paper presents a comprehensive review of the current state-of-the-art research on task offloading issues in the Edge-Cloud environment.
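The offloading latency models mentioned above can be approximated by decomposing service time into upload, processing, and download delays, as in the sketch below comparing local, edge, and cloud execution. The bandwidths, CPU rates, and task sizes are illustrative assumptions; the paper's actual models and EdgeCloudSim parameters are not reproduced here.

```python
# Simplified offloading-latency model along the lines sketched above:
# service time = upload delay + processing delay + download delay,
# compared across local, edge, and cloud execution. All parameter values
# are illustrative assumptions, not the paper's EdgeCloudSim settings.
def service_time(input_mb: float, output_mb: float, task_gcycles: float,
                 uplink_mbps: float, downlink_mbps: float, cpu_ghz: float) -> float:
    upload = (input_mb * 8) / uplink_mbps          # seconds to send the input
    download = (output_mb * 8) / downlink_mbps     # seconds to receive the result
    processing = task_gcycles / cpu_ghz            # seconds of computation
    return upload + processing + download

if __name__ == "__main__":
    task = dict(input_mb=5.0, output_mb=0.5, task_gcycles=12.0)

    # Local execution: no network transfer, but a slow device CPU.
    local = service_time(**task, uplink_mbps=float("inf"), downlink_mbps=float("inf"), cpu_ghz=1.5)
    # Edge: fast nearby link, moderately fast server.
    edge = service_time(**task, uplink_mbps=100.0, downlink_mbps=100.0, cpu_ghz=8.0)
    # Cloud: slower WAN link, very fast server.
    cloud = service_time(**task, uplink_mbps=20.0, downlink_mbps=20.0, cpu_ghz=32.0)

    for name, t in [("local", local), ("edge", edge), ("cloud", cloud)]:
        print(f"{name}: {t:.2f} s")
```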
Abstract: With the expansion of cloud computing, optimizing the energy efficiency and cost of the cloud paradigm is considered significantly important, since it directly affects providers' revenue and customers' payments. Prediction information about cloud services can therefore be very beneficial for service providers, as they need to carefully predict their business growth and efficiently manage their resources. To optimize the use of cloud services, predictive mechanisms can be applied to improve resource utilization and reduce energy-related costs. However, such mechanisms need to be provided with energy awareness not only at the level of the Physical Machine (PM) but also at the level of the Virtual Machine (VM) in order to make improved cost decisions. Therefore, this paper presents a comprehensive literature review on energy-related cost issues and prediction models in cloud computing environments, along with an overall discussion of closely related works. The outcomes of this research can be incorporated into predictive resource management techniques to make improved, energy-aware cost decisions and leverage cloud resources efficiently.
Abstract: With the striking rise in the penetration of cloud computing, energy consumption is considered one of the key cost factors that need to be managed within cloud providers' infrastructures. Accordingly, recent approaches and strategies based on reactive and proactive methods have been developed for managing cloud computing resources so that energy consumption and operational costs are minimized. However, to make better cost decisions in these strategies, performance and energy awareness should be supported at both the Physical Machine (PM) and Virtual Machine (VM) levels. Therefore, this paper proposes a novel hybrid approach that jointly considers the prediction of performance variation, energy consumption, and cost for heterogeneous VMs. The approach aims to integrate auto-scaling with live migration while maintaining the expected level of service performance, using power consumption and resource usage to estimate the VMs' total cost. Specifically, service performance variation is handled by detecting underloaded and overloaded PMs, so that decisions are made in a cost-effective manner. A detailed testbed evaluation demonstrates that the proposed approach not only predicts VM workload and power consumption but also estimates the overall cost of live migration and auto-scaling during service operation, with high prediction accuracy on the basis of historical workload patterns.
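As a rough illustration of the kind of decision logic described above, the sketch below predicts a PM's near-term utilization with a plain moving average (standing in for the paper's prediction models), flags the PM as underloaded or overloaded against assumed thresholds, and estimates a VM's energy cost from its power draw. Thresholds, power figures, and the electricity price are illustrative assumptions, not the paper's method or parameters.

```python
# Threshold-based sketch of the over/underload detection and energy-cost
# estimation described above. A moving average stands in for the paper's
# prediction models; thresholds and prices are illustrative assumptions.
from statistics import mean

UNDER, OVER = 0.20, 0.80          # assumed utilization thresholds

def predict_utilization(history: list[float], window: int = 6) -> float:
    """Predict near-term utilization as the mean of the last `window` samples."""
    return mean(history[-window:])

def classify_pm(history: list[float]) -> str:
    u = predict_utilization(history)
    if u < UNDER:
        return "underloaded"       # candidate source for VM consolidation
    if u > OVER:
        return "overloaded"        # candidate for scaling / live migration
    return "normal"

def vm_energy_cost(avg_power_w: float, hours: float, usd_per_kwh: float = 0.12) -> float:
    """Estimate a VM's energy cost from its average power draw."""
    return avg_power_w / 1000.0 * hours * usd_per_kwh

if __name__ == "__main__":
    pm_history = [0.85, 0.88, 0.90, 0.83, 0.87, 0.91]
    print("PM state:", classify_pm(pm_history))
    print("VM energy cost for 24 h:", round(vm_energy_cost(120.0, 24.0), 2), "USD")
```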