6 articles found
1. Auto-Scaling Framework for Enhancing the Quality of Service in the Mobile Cloud Environments
Authors: Yogesh Kumar, Jitender Kumar, Poonam Sheoran. Computers, Materials & Continua (SCIE, EI), 2023, Issue 6, pp. 5785-5800 (16 pages)
On-demand availability and resource elasticity features of Cloud computing have attracted the focus of various research domains. Mobile cloud computing is one of these domains, where complex computation tasks are offloaded to cloud resources to augment the cognitive capacity of mobile devices. However, the flexible provisioning of cloud resources is hindered by uncertain offloading workloads and the significant setup time of cloud virtual machines (VMs). Furthermore, any delays at the cloud end would further degrade the responsiveness of real-time tasks. To resolve these issues, this paper proposes an auto-scaling framework (ACF) that strives to maintain the quality of service (QoS) for end users at the assurance level for service availability negotiated in the service level agreement (SLA). In addition, it provides an innovative solution for dealing with VM startup overheads without truncating the running tasks. Unlike systems based on a waiting-cost versus service-cost tradeoff or on threshold rules, it does not require strict tuning of waiting costs or threshold rules to enhance the QoS. We explored the design space of the ACF system with the CloudSim simulator. Extensive sets of experiments demonstrate the effectiveness of the ACF system in terms of a clear reduction in energy dissipation at the mobile devices and an improvement in the QoS. At the same time, the proposed ACF system also reduces the monetary costs of the service providers.
Keywords: auto-scaling; computation offloading; mobile cloud computing; quality of service; service level agreement
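The abstract describes the problem ACF targets (keeping an SLA-negotiated availability level while VM startup time delays new capacity) without detailing the algorithm itself. The following minimal Python sketch only illustrates that generic scale-out decision; the class, function, and numbers are hypothetical and do not reproduce the paper's method.

```python
# A generic, assumed illustration of SLA-driven scale-out under VM startup
# delay; not the ACF algorithm from the paper.
import math
from dataclasses import dataclass

@dataclass
class ClusterState:
    active_vms: int       # VMs currently serving offloaded tasks
    booting_vms: int      # VMs still in their startup phase
    tasks_per_vm: int     # serving capacity of a single VM
    queued_tasks: int     # offloaded tasks waiting for service

def extra_vms_needed(state: ClusterState, sla_availability: float) -> int:
    """How many additional VMs to start so the expected fraction of tasks
    served without queuing meets the SLA availability target."""
    capacity = (state.active_vms + state.booting_vms) * state.tasks_per_vm
    if state.queued_tasks == 0:
        return 0
    if capacity / state.queued_tasks >= sla_availability:
        return 0
    shortfall = state.queued_tasks * sla_availability - capacity
    return math.ceil(shortfall / state.tasks_per_vm)

# Example: 4 active VMs and 1 booting VM (10 tasks each) against 80 queued
# tasks with a 0.95 availability target -> start 3 more VMs.
state = ClusterState(active_vms=4, booting_vms=1, tasks_per_vm=10, queued_tasks=80)
print(extra_vms_needed(state, sla_availability=0.95))
```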
2. Towards Yo-Yo attack mitigation in cloud auto-scaling mechanism
Authors: Xiaoqiong Xu, Jin Li, Hongfang Yu, Long Luo, Xuetao Wei, Gang Sun. Digital Communications and Networks (SCIE), 2020, Issue 3, pp. 369-376 (8 pages)
Cloud platforms can automatically scale underlying network resources up and down in response to changes in the traffic load. Such an auto-scaling mechanism can largely enhance the elasticity and scalability of cloud platforms. However, it may introduce new security threats. For example, the Yo-Yo attack is a newly disclosed attack against the cloud auto-scaling mechanism: attackers periodically send bursts of traffic to cause the auto-scaling mechanism to oscillate between the scale-up and scale-down processes, which may result in significant performance degradation and economic loss. None of the prior work addressed the problem of mitigating such an attack. In this paper, we propose a Trust-based Adversarial Scanner Delaying (TASD) approach to effectively and proactively mitigate the Yo-Yo attack on the cloud auto-scaling mechanism. In TASD, we first propose a trust-based scheme to establish trust values for users, which are leveraged to identify adversarial requests. Trust values are updated by jointly considering the request mode and the auto-scaling status. Then, we aim to disable the condition under which the Yo-Yo attack takes effect by injecting a certain delay, under the QoS constraints, to manipulate the response time of suspicious requests and deceive the attackers. Our extensive evaluation demonstrates that our approach achieves promising results, e.g., it can detect at least 80% of Yo-Yo adversarial users and reduce malicious scale-ups by more than 41%.
Keywords: cloud computing; auto-scaling mechanism; Yo-Yo attack; attack detection; attack defense
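The general idea the abstract outlines (per-user trust values that drop when traffic bursts coincide with scale-up events, plus a bounded delay injected for low-trust requests) can be sketched as below. The thresholds, update rules, and names are assumptions for illustration, not the TASD algorithm from the paper.

```python
# Illustrative sketch of trust-based request delaying; values are assumed.
import random
import time

TRUST_INIT = 1.0
TRUST_SUSPECT = 0.5          # below this, a user's requests are treated as suspicious
MAX_EXTRA_DELAY_S = 0.2      # stays within an assumed QoS latency budget

trust: dict[str, float] = {}

def update_trust(user: str, burst_detected: bool, scaling_up: bool) -> None:
    """Penalize users whose traffic bursts line up with scale-up phases;
    slowly restore trust otherwise."""
    t = trust.get(user, TRUST_INIT)
    t = t - 0.2 if (burst_detected and scaling_up) else t + 0.05
    trust[user] = min(max(t, 0.0), TRUST_INIT)

def handle_request(user: str) -> None:
    """Inject a small random delay for suspicious users before serving."""
    if trust.get(user, TRUST_INIT) < TRUST_SUSPECT:
        time.sleep(random.uniform(0.0, MAX_EXTRA_DELAY_S))
    # ... serve the request as usual ...
```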
3. A Hybrid Approach for Performance and Energy-Based Cost Prediction in Clouds
Author: Mohammad Aldossary. Computers, Materials & Continua (SCIE, EI), 2021, Issue 9, pp. 3531-3562 (32 pages)
With the striking rise in the penetration of Cloud Computing, energy consumption is considered one of the key cost factors that need to be managed within cloud providers' infrastructures. Subsequently, recent approaches and strategies based on reactive and proactive methods have been developed for managing cloud computing resources so that energy consumption and operational costs are minimized. However, to make better cost decisions in these strategies, performance and energy awareness should be supported at both the Physical Machine (PM) and Virtual Machine (VM) levels. Therefore, this paper proposes a novel hybrid approach that jointly considers the prediction of performance variation, energy consumption, and cost of heterogeneous VMs. The approach aims to integrate auto-scaling with live migration while maintaining the expected level of service performance, using power consumption and resource usage to estimate the VMs' total cost. Specifically, service performance variation is handled by detecting underloaded and overloaded PMs, so that decisions are made in a cost-effective manner. Detailed testbed evaluation demonstrates that the proposed approach not only predicts the VMs' workload and power consumption but also estimates the overall cost of live migration and auto-scaling during service operation, with high prediction accuracy based on historical workload patterns.
Keywords: cloud computing; energy efficiency; auto-scaling; live migration; workload prediction; energy prediction; cost estimation
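The abstract mentions estimating a VM's total cost from power consumption and resource usage. A simple assumed cost model of that kind is sketched below; the prices, the attribution of host power to a VM by CPU share, and the function names are illustrative assumptions, not the paper's model.

```python
# Assumed VM cost model: energy cost attributed to the VM + resource cost.
def vm_energy_kwh(host_power_watts: float, vm_cpu_share: float, hours: float) -> float:
    """Attribute a share of host power to the VM and convert to kWh."""
    return host_power_watts * vm_cpu_share * hours / 1000.0

def vm_total_cost(host_power_watts: float, vm_cpu_share: float, hours: float,
                  price_per_kwh: float, vcpu_count: int,
                  price_per_vcpu_hour: float) -> float:
    """Total cost = attributed energy cost + vCPU resource cost."""
    energy_cost = vm_energy_kwh(host_power_watts, vm_cpu_share, hours) * price_per_kwh
    resource_cost = vcpu_count * price_per_vcpu_hour * hours
    return energy_cost + resource_cost

# Example: a VM using 25% of a 400 W host for 24 h, with 2 vCPUs.
print(vm_total_cost(400, 0.25, 24, price_per_kwh=0.12,
                    vcpu_count=2, price_per_vcpu_hour=0.04))
```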
4. A Review of Dynamic Resource Management in Cloud Computing Environments
Author: Mohammad Aldossary. Computer Systems Science & Engineering (SCIE, EI), 2021, Issue 3, pp. 461-476 (16 pages)
In a cloud environment, Virtual Machine (VM) consolidation and resource provisioning are used to address workload fluctuations. VM consolidation aims to move VMs from one host to another in order to reduce the number of active hosts and save power, whereas resource provisioning attempts to provide additional resource capacity to the VMs as needed in order to meet Quality of Service (QoS) requirements. However, these techniques have limitations, in terms of the additional costs related to migration and scaling time and the energy overhead, that need further consideration. Therefore, this paper presents a comprehensive literature review on dynamic resource management (i.e., VM consolidation and resource provisioning) in cloud computing environments, along with an overall discussion of the closely related works. The outcomes of this research can be used to enhance the development of predictive resource management techniques by considering awareness of performance variation, energy consumption, and cost to efficiently manage cloud resources.
Keywords: cloud computing; resource management; VM consolidation; live migration; resource provisioning; auto-scaling
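The two techniques the review contrasts can be illustrated by a tiny host-classification sketch: underloaded hosts become consolidation candidates (migrate their VMs away and power them down), overloaded hosts become provisioning candidates (add capacity). The utilization thresholds and host model below are illustrative assumptions, not drawn from the review.

```python
# Assumed utilization thresholds driving consolidation vs. provisioning decisions.
UNDERLOAD = 0.2
OVERLOAD = 0.8

def classify_host(cpu_utilization: float) -> str:
    if cpu_utilization < UNDERLOAD:
        return "underloaded"   # candidate for consolidation (migrate VMs away)
    if cpu_utilization > OVERLOAD:
        return "overloaded"    # candidate for provisioning (scale up / out)
    return "normal"

hosts = {"host-1": 0.12, "host-2": 0.55, "host-3": 0.91}
for name, util in hosts.items():
    print(name, classify_host(util))
```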
5. Exploring serverless computing for stream analytic
Authors: Cheng Yingchao, Hao Zhifeng, Cai Ruichu. High Technology Letters (EI, CAS), 2020, Issue 1, pp. 17-24 (8 pages)
This work proposes ARS(FaaS), a serverless framework that schedules and provisions resources for streaming applications autonomously, ensuring real-time response to unpredictable and fluctuating streaming data. An HPC cloud platform is used as the underlying platform on which serverless computing for stream analytics is explored. This work enables application developers to build and run streaming applications without worrying about servers, meaning that developers can focus on application features instead of scheduling and provisioning infrastructure resources. The serverless computing framework, ARS(FaaS), provides function-as-a-service so that developers write code as discrete event-driven functions. ARS(FaaS) runs and scales the developer's code automatically according to the throughput of streaming events. The major contribution of this serverless framework is effective and efficient autonomous resource scheduling for real-time streaming analytics, which enables developers to build applications faster. The ARS(FaaS) framework is appropriate for real-time stream analytics on event-driven data with spiky and variable compute requirements.
Keywords: serverless; stream processing; HPC cloud; auto-scaling; function-as-a-service (FaaS)
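The scaling behaviour the abstract attributes to ARS(FaaS), where the number of function instances tracks the rate of incoming streaming events, can be sketched as follows. The target events-per-instance figure, bounds, and names are assumptions for illustration, not the framework's actual policy.

```python
# Illustrative throughput-driven scaling of function instances; values assumed.
import math

TARGET_EVENTS_PER_INSTANCE = 500   # assumed per-instance throughput budget
MIN_INSTANCES = 1
MAX_INSTANCES = 64

def desired_instances(events_per_second: float) -> int:
    """Scale the function instance pool with the observed streaming throughput."""
    wanted = math.ceil(events_per_second / TARGET_EVENTS_PER_INSTANCE)
    return max(MIN_INSTANCES, min(MAX_INSTANCES, wanted))

# Example: a spike from 300 to 4200 events/s grows the pool from 1 to 9 instances.
print(desired_instances(300), desired_instances(4200))
```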
6. Auto-SC: An Automatic Design Program for Switched-Capacitor Ladder Filters
Authors: D. G. Nair, Wu Baoyuan (translator). 《南邮科技译丛》, 1989, Issue 3, pp. 66-69 (4 pages)
Keywords: auto-sc; switched circuits; ladder; filters