Cloud agreements are mostly signed between the consumer and the provider through online click-through agreements. Several issues and conflicts arise in the negotiation of cloud agreement terms because of the legal and ambiguous wording in Service Level Agreements (SLAs). Semantic knowledge applied during the formation and negotiation of an SLA can overcome these issues. Cloud SLA negotiation comprises numerous activities, such as forming SLA templates, publishing them in a registry, verifying and validating the SLA, monitoring for violations, logging and reporting, and termination. Although these activities are interleaved with one another, semantic synchronization is still lacking. To overcome this, a novel SLA life cycle that uses semantic knowledge to automate cloud negotiation has been formulated. A semantic web platform based on ontologies is designed, developed, and evaluated. The resulting platform increases the task efficiency of both the consumer and the provider during negotiation. Precision and recall scores for Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) SLAs were calculated, revealing that applying semantic knowledge helps extract meaningful answers from the cloud actors.
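For reference, the precision and recall scores mentioned in this abstract follow the standard information-retrieval definitions; the abstract itself does not give the formulas, so these are the conventional forms:

```latex
\mathrm{Precision} = \frac{|\,\text{relevant} \cap \text{retrieved}\,|}{|\,\text{retrieved}\,|},
\qquad
\mathrm{Recall} = \frac{|\,\text{relevant} \cap \text{retrieved}\,|}{|\,\text{relevant}\,|}
```

Here "retrieved" denotes the answers extracted by the semantic platform for an SLA query and "relevant" the answers judged correct by the cloud actors.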
From the viewpoint of service level agreements, the transmission accuracy rate is one of the critical performance indicators used by system managers and customers to assess Internet quality. Under the assumption that each arc's capacity is deterministic, the quickest path problem is to find a path that sends a specified amount of data such that the transmission time is minimized. However, in many real-life networks such as computer networks, each arc has a stochastic capacity, lead time, and accuracy rate. Such a network is called a multi-state computer network. Under both an assured accuracy rate and a time constraint, we extend the quickest path problem to compute the probability that d units of data can be sent through multiple minimal paths simultaneously. This probability, named the system reliability, is a performance indicator provided to managers for understanding the capability of the system and improving it. An efficient algorithm based on minimal paths is proposed to evaluate the system reliability.
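For context, a common formulation of the transmission time in the quickest path problem for a fixed path p is shown below; the paper's exact notation may differ:

```latex
T_{p}(d) \;=\; L_{p} + \left\lceil \frac{d}{c_{p}} \right\rceil,
\qquad c_{p} = \min_{a \in p} c_{a}
```

where d is the amount of data, L_p is the total lead time along path p, and c_p is the bottleneck (minimum) arc capacity. With stochastic capacities, the extension described above asks for the probability that some combination of minimal paths meets both the time bound and the accuracy requirement.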
The cloud service level agreement (SLA) manages the relationship between service providers and consumers in cloud computing. The SLA is an integral and critical part of modern IT vendor and communication contracts. Because low cost and flexibility lead more and more consumers to delegate their tasks to cloud providers, the SLA emerges as a key aspect of the relationship between consumers and providers. Continuous monitoring of Quality of Service (QoS) attributes is required to implement SLAs because of the complex nature of cloud communication. Many other factors, such as user reliability, satisfaction, and penalties on violations, are also taken into account. Currently, there is no cloud SLA monitoring policy designed to minimize SLA violations. In this work, we propose a cloud SLA monitoring policy that divides a monitoring session into two parts, one for critical and one for non-critical parameters. The critical and non-critical parameters are decided according to the interest of the consumer during SLA negotiation. This helps to shape a new comprehensive SLA-based Proactive Resource Allocation Approach (PRAA), which monitors the SLA at runtime, analyzes the SLA parameters, and tries to detect possible SLA violations. We have also implemented an adaptive system for allocating cloud IT resources based on SLA violation detection. We define two main components of SLA-PRAA, (a) the Handler and (b) the Accounting and Billing Manager, and describe the function of both components through algorithms. The experimental results validate the performance of our proposed method in comparison with state-of-the-art cloud SLA policies.
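The abstract gives no pseudocode for the two-part monitoring session; the sketch below only illustrates the general idea of polling consumer-selected critical parameters every cycle and non-critical ones less often. All metric names, limits, and periods are assumptions.

```python
import random
import time

# Hypothetical illustration only: parameters the consumer marked as critical during
# SLA negotiation are polled every cycle; non-critical ones every few cycles.
CRITICAL = {"response_time_ms": 200.0}      # metric -> SLA limit (assumed values)
NON_CRITICAL = {"packet_loss_pct": 1.0}

def read_metric(name: str) -> float:
    """Stand-in probe; a real monitor would query the cloud platform here."""
    return random.uniform(0.0, 300.0) if name == "response_time_ms" else random.uniform(0.0, 2.0)

def monitoring_session(cycles: int, non_critical_every: int = 5) -> None:
    for t in range(cycles):
        for name, limit in CRITICAL.items():            # checked every cycle
            if read_metric(name) > limit:
                print(f"[critical] possible violation of {name}: allocate resources proactively")
        if t % non_critical_every == 0:                  # checked less frequently
            for name, limit in NON_CRITICAL.items():
                if read_metric(name) > limit:
                    print(f"[non-critical] {name} exceeded {limit}; log and report")
        time.sleep(0.01)                                 # shortened period for the sketch

monitoring_session(cycles=20)
```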
The on-demand availability and resource elasticity features of cloud computing have attracted the attention of various research domains. Mobile cloud computing is one of these domains, where complex computation tasks are offloaded to cloud resources to augment the cognitive capacity of mobile devices. However, flexible provisioning of cloud resources is hindered by uncertain offloading workloads and the significant setup time of cloud virtual machines (VMs). Furthermore, any delays at the cloud end further aggravate the problems of real-time tasks. To resolve these issues, this paper proposes an auto-scaling framework (ACF) that strives to maintain the quality of service (QoS) for end users at the assurance level for service availability negotiated in the service level agreement (SLA). In addition, it provides an innovative solution for dealing with VM startup overheads without truncating running tasks. Unlike systems based on a waiting-cost/service-cost trade-off or on threshold rules, it does not require strict tuning of waiting costs or threshold rules to enhance the QoS. We explored the design space of the ACF system with the CloudSim simulator. Extensive sets of experiments demonstrate the effectiveness of the ACF system in terms of a good reduction in energy dissipation at the mobile devices and an improvement in the QoS. At the same time, the proposed ACF system also reduces the monetary costs of the service providers.
In this in-depth exploration, I delve into the complex implications and costs of cybersecurity breaches. Venturing beyond the immediate repercussions, the research unearths both the overt and the concealed long-term consequences that businesses encounter. This study integrates findings from various studies, including quantitative reports, drawing upon real-world incidents faced by both small and large enterprises. The investigation emphasizes the profound intangible costs, such as trade-name devaluation and potential damage to brand reputation, which can persist long after the breach. By collating insights from industry experts and a wide body of research, the study provides a comprehensive perspective on the profound, multi-dimensional impacts of cybersecurity incidents. The overarching aim is to underscore the often-underestimated scope and depth of these breaches, emphasizing the entire post-incident timeline and the urgent need for fortified preventative and reactive measures in the digital domain.
Personal cloud computing is an emerging trend in the computer industry. For a sustainable service, cloud computing services must control user access. The essential business characteristics of cloud computing are payment status and the service level agreement. This work proposes a novel access control method for the personal cloud service business. The proposed method defines metadata, policy analysis rules, and access-denying rules. The metadata define the structure of access control policies and user requirements for cloud services. The policy analysis rules are used to detect conflicts and redundancies between access control policies. The access-denying rules apply policies that inhibit inappropriate access. Ontology is the theoretical foundation of this method. In this work, ontologies for payment status, access permission, service level, and the cloud provide the semantic information needed to execute the rules. A scenario of a personal data backup cloud service is also provided. This work potentially provides cloud service providers with a convenient method of controlling user access according to changeable business and marketing strategies.
Cloud computing provides IT services to its customers based on service level agreements (SLAs). It is important for cloud service providers to deliver reliable Quality of Service (QoS) and to maintain SLA accountability. Cloud service providers need to predict possible service violations before an issue emerges in order to perform remedial actions. Cloud users' major concerns, the factors on which service reliability is based, are response time, accessibility, availability, and speed. In this paper, we therefore experiment with parallel mutant Particle Swarm Optimization (PSO) for the detection and prediction of QoS violations in terms of response time, speed, accessibility, and availability. The paper also compares simple PSO and parallel mutant PSO. The simulation results show that the proposed parallel mutant PSO solution for cloud QoS violation prediction achieves 94% accuracy, a far more accurate result, and is computationally the fastest technique in comparison with the conventional PSO technique.
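As background for the technique named above, the sketch below shows only the canonical particle swarm update that mutant and parallel variants build on; the abstract does not describe the mutation operator or the parallel scheme, and the swarm size, coefficients, and toy fitness function are assumptions.

```python
import random

# Canonical PSO velocity/position update (minimization). Mutant/parallel extensions
# would modify or distribute this loop; those details are not given in the abstract.
def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

# Toy usage: minimize the squared deviation of a normalized predicted response time from a target.
print(pso(lambda x: (x[0] - 0.2) ** 2, dim=1))
```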
Cloud resource scheduling is gaining prominence with the increasing reliance on cloud infrastructure solutions, and numerous cloud resource scheduling models are evident in the literature. Cloud resource scheduling refers to the distinct set of algorithms or programs that service providers engage to maintain the service-level allocation of various resources over a virtual environment. The model proposed in this manuscript schedules virtual machine resources under potential volatility and can be applied to any priority metric chosen by the server administrators. The model is also flexible enough for any time-frame-based analysis of the load factor. It relies on the Bollinger Bands tool to understand the potential volatility of a virtual machine. An experimental study comparing the model with the contemporary load balancing model STLB (Starvation Threshold-based Load Balancing) indicates a simple yet capable model that can be more pragmatic for sustainable load balancing.
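Since the abstract names Bollinger Bands but not how they are mapped onto scheduling decisions, the following sketch shows only the standard band computation over a VM load trace; the window size and band multiplier are conventional defaults, not values from the paper.

```python
import statistics

def bollinger_bands(load, window=20, k=2.0):
    """Standard Bollinger Bands over a VM load series: moving average +/- k standard deviations.
    window and k are conventional defaults (assumptions), not the referenced model's values."""
    bands = []
    for i in range(window - 1, len(load)):
        sample = load[i - window + 1 : i + 1]
        mid = statistics.mean(sample)
        sd = statistics.pstdev(sample)
        bands.append((mid - k * sd, mid, mid + k * sd))
    return bands

# Toy usage: a synthetic CPU-utilization trace; a sample breaching the upper band
# would flag the VM as volatile for the scheduler.
trace = [40, 42, 41, 45, 43, 44, 46, 47, 52, 60, 58, 55, 50, 48, 47, 46, 45, 44, 43, 70, 72]
lower, mid, upper = bollinger_bands(trace)[-1]
print(f"latest load {trace[-1]}%, upper band {upper:.1f}%")
```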
In the age of online workload explosion, cloud users are increasing exponentially. Therefore, large-scale data centers are required in the cloud environment, which leads to high energy consumption. Hence, optimal resource utilization is essential to improve the energy efficiency of cloud data centers. Most of the existing literature, however, focuses on virtual machine (VM) consolidation that increases energy efficiency at the cost of service level agreement degradation. To improve upon existing approaches, a load-aware three-gear THReshold (LATHR) combined with a modified best fit decreasing (MBFD) algorithm is proposed to minimize total energy consumption while improving the quality of service in terms of the SLA. It offers promising results under dynamic workloads and a variable number of VMs (1-290) allocated to an individual host. The outcomes of the proposed work are measured in terms of SLA violations, energy consumption, instruction energy ratio (IER), and the number of migrations against varied numbers of VMs. The experimental results show that the proposed technique reduced SLA violations (by 55%, 26%, and 39%) and energy consumption (by 17%, 12%, and 6%) compared with the median absolute deviation (MAD), interquartile range (IQR), and double threshold (THR) overload detection policies, respectively.
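The abstract does not specify the modification in MBFD; the sketch below illustrates the underlying best-fit-decreasing style of placement, where VMs are ordered by demand and each is placed on the host whose estimated power draw increases the least. The linear power model, host names, and numbers are assumptions.

```python
# Best-fit-decreasing style VM placement in the spirit of MBFD (a sketch; the paper's
# specific modification and LATHR thresholds are not described in the abstract).

def power(util, idle_w=100.0, max_w=250.0):
    return idle_w + (max_w - idle_w) * util          # linear power model (assumed)

def mbfd_like_placement(vms, hosts):
    """vms: list of CPU demands in [0, 1]; hosts: list of dicts with 'name', 'cap', 'used'."""
    placement = {}
    for i, demand in sorted(enumerate(vms), key=lambda kv: kv[1], reverse=True):
        best, best_delta = None, float("inf")
        for h in hosts:
            if h["used"] + demand <= h["cap"]:
                delta = power((h["used"] + demand) / h["cap"]) - power(h["used"] / h["cap"])
                if delta < best_delta:
                    best, best_delta = h, delta       # host with least power increase wins
        if best is not None:
            best["used"] += demand
            placement[i] = best["name"]
    return placement

hosts = [{"name": "h0", "cap": 1.0, "used": 0.0}, {"name": "h1", "cap": 1.0, "used": 0.0}]
print(mbfd_like_placement([0.3, 0.5, 0.2, 0.4], hosts))
```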
From the viewpoint of service level agreements, data transmission accuracy is one of the critical performance measures by which service providers and enterprise customers assess the Internet. The stochastic computer network (SCN), in which each edge has several possible capacities and an accuracy rate, has multiple terminals. This paper mainly aims to evaluate the system reliability of an SCN, where the system reliability is the probability that the demand can be fulfilled under the total accuracy rate requirement. A minimal capacity vector allows the system to transmit the demand to each terminal under the total accuracy rate. This study proposes an efficient algorithm to find all minimal capacity vectors by means of minimal paths. The system reliability can then be computed from all minimal capacity vectors by the recursive sum of disjoint products (RSDP) algorithm.
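In the multi-state network reliability literature, the system reliability referred to here is usually expressed through the minimal capacity vectors as the probability of the union of the upper sets they generate (notation assumed; the paper may use different symbols):

```latex
R_{d} \;=\; \Pr\!\left(\bigcup_{j=1}^{m} \{\, X \ge x_{j} \,\}\right)
```

where X is the random capacity vector of the network, x_1, ..., x_m are the minimal capacity vectors that satisfy the demand d and the accuracy requirement, and the ordering is component-wise. The RSDP algorithm evaluates this union probability as a recursive sum of disjoint products.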
With the proliferation of cloud services and the development of fine-grained virtualization techniques, the Cloud Management System (CMS) is required to manage multiple resources efficiently for large-scale, high-density computing units. In particular, providing a guaranteed networking Service Level Agreement (SLA) has become a challenge. This paper proposes MN-SLA (Modular Networking SLA), a framework that provides networking SLAs and enables their seamless integration with existing CMSes. Targeting a modular, general, robust, and efficient design, MN-SLA abstracts general Application Programming Interfaces (APIs) for interaction between the CMS and the SLA subsystem, and it is able to accomplish the integration with only minor modifications to the CMS. Evaluations based on large-scale simulation show that the proposed networking SLA scheduling is promising in terms of resource utilization, being able to accommodate at least 1.4x the number of instances of its competitors.
Cloud computing is a very promising paradigm of service-oriented computing. One major benefit of cloud computing is its elasticity, i.e., the system's capacity to add and remove resources automatically at runtime. For that, it is essential to design and implement an efficient and effective technique that takes full advantage of the system's potential flexibility. This paper presents a non-intrusive approach that monitors the performance of relational database management systems in a cloud infrastructure and automatically makes decisions to maximize the efficiency of the provider's environment while still satisfying the agreed-upon service level agreements (SLAs). Our experiments, conducted on Amazon's cloud infrastructure, confirm that the technique is capable of automatically and dynamically adjusting the system's allocated resources while observing the SLA.
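The abstract does not detail the decision logic; the following control-loop sketch only illustrates the general pattern of observing a database SLA metric non-intrusively and scaling the allocation up or down. All metric names, thresholds, and scaling steps are assumptions.

```python
import random
from dataclasses import dataclass

# Generic monitor-and-adjust loop in the spirit of the approach described above.
# Metric names, SLA thresholds, and scaling granularity are illustrative assumptions.

@dataclass
class DbAllocation:
    vcpus: int = 2
    min_vcpus: int = 1
    max_vcpus: int = 16

def observed_query_latency_ms() -> float:
    """Stand-in for a non-intrusive probe of the managed RDBMS."""
    return random.uniform(50.0, 400.0)

def adjust(alloc: DbAllocation, sla_latency_ms: float = 250.0, slack: float = 0.5) -> None:
    latency = observed_query_latency_ms()
    if latency > sla_latency_ms and alloc.vcpus < alloc.max_vcpus:
        alloc.vcpus += 1        # scale up to keep the SLA satisfied
    elif latency < slack * sla_latency_ms and alloc.vcpus > alloc.min_vcpus:
        alloc.vcpus -= 1        # scale down to save the provider's resources
    print(f"latency={latency:.0f} ms -> vcpus={alloc.vcpus}")

alloc = DbAllocation()
for _ in range(5):
    adjust(alloc)
```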
Conventional resource provisioning algorithms focus on maximizing resource utilization while meeting a fixed response time constraint written into the service level agreement (SLA). Unfortunately, the expected response time is highly variable and is usually longer than the value in the SLA, which leads to poor resource utilization and unnecessary server migrations. We develop a framework for customer-driven dynamic resource allocation in cloud computing, termed CDSMS (customer-driven service management system). The framework's contributions are twofold. First, it can reduce the total number of migrations by adjusting the response time parameter dynamically according to customers' profiles. Second, it can automatically choose the best resource provisioning algorithm in different scenarios to improve resource utilization. Finally, we perform a series of experiments on a real cloud computing platform. Experimental results show that CDSMS provides a satisfactory solution for predicting the expected response time and the interval between two tasks, and reduces the total resource usage cost.