For the cloud computing system, combined with the memory function and incomplete matching of the biological immune system, a formal modeling and analysis method for cloud computing system survivability is proposed by analyzing the survival situation of critical cloud services. First, on the basis of the SAIR (susceptible, active, infected, recovered) model, the SEIRS (susceptible, exposed, infected, recovered, susceptible) model, and the vulnerability diffusion model of the distributed virtual system, the evolution state of the virus is divided into six types, and then the diffusion rules of the virus within a service domain of the cloud computing system and the propagation rules between service domains are analyzed. Finally, on the basis of Bio-PEPA (biological performance evaluation process algebra), the survivability evolution of critical cloud services is formally modeled, and the SLIRAS (susceptible, latent, infected, recovered, antidotal, susceptible) model is obtained. Based on the stochastic simulation and the ODE (ordinary differential equation) simulation of the Bio-PEPA model, the sensitivity parameters of the model are analyzed from three aspects, namely, inter-domain virus propagation speed, recovery ability, and memory ability. The results show that the proposed model fits the actual cloud computing system closely and reflects the survivability changes of the system well.
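The abstract names the SLIRAS compartments but not the rate equations, and Bio-PEPA models admit both a stochastic and an ODE interpretation; the ODE view is what the minimal sketch below imitates. Every transition rate here (beta, sigma, gamma, omega, epsilon, alpha) is invented for illustration and is not the authors' fitted Bio-PEPA model.

```python
# Minimal SLIRAS-style compartment sketch (hypothetical rates, not the paper's model).
# S: susceptible, L: latent, I: infected, R: recovered, A: antidotal (memory-protected).
import numpy as np
from scipy.integrate import solve_ivp

beta, sigma, gamma, omega, epsilon, alpha = 0.5, 0.3, 0.2, 0.05, 0.01, 0.02

def sliras(t, y):
    S, L, I, R, A = y
    dS = omega * R - beta * S * I - epsilon * S   # R -> S immunity loss; S leaves to L or A
    dL = beta * S * I - sigma * L                 # latency before the virus activates
    dI = sigma * L - gamma * I                    # infected nodes recover at rate gamma
    dR = gamma * I - omega * R - alpha * R        # recovered nodes relapse or become antidotal
    dA = epsilon * S + alpha * R                  # antidotal nodes stay immune (absorbing)
    return [dS, dL, dI, dR, dA]

sol = solve_ivp(sliras, (0, 200), [0.99, 0.0, 0.01, 0.0, 0.0], dense_output=True)
print(sol.y[:, -1])  # long-run share of each compartment (population is conserved)
```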
Traditional collaborative filtering recommendation technology has shortcomings in the big data environment. To solve this problem, a personalized recommendation method based on cloud computing technology is proposed. The large data set and the recommendation computation are decomposed for parallel processing on multiple computers. A parallel recommendation engine based on the open-source Hadoop framework is established, and the effectiveness of the system is validated by learning recommendation on an English training platform. The experimental results show that the scalability of the recommender system can be greatly improved by using cloud computing technology to handle massive data in the cluster. On the basis of a comparison with traditional recommendation algorithms, and combining the advantages of cloud computing, a personalized recommendation system based on cloud computing is proposed.
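The abstract does not spell out the parallel algorithm; item-based collaborative filtering is commonly expressed on Hadoop as a co-occurrence MapReduce job, which the hypothetical single-process sketch below imitates. The mapper/reducer split and the sample ratings are invented for illustration.

```python
# Item co-occurrence collaborative filtering, written as mapper/reducer steps
# so the same logic could be ported to Hadoop MapReduce jobs. All data invented.
from collections import defaultdict
from itertools import combinations

ratings = {  # user -> set of liked items (stand-in for a massive HDFS data set)
    "u1": {"lessonA", "lessonB", "lessonC"},
    "u2": {"lessonA", "lessonC"},
    "u3": {"lessonB", "lessonC", "lessonD"},
}

# "Map" phase: emit co-occurring item pairs per user.
pairs = defaultdict(int)
for items in ratings.values():
    for a, b in combinations(sorted(items), 2):
        pairs[(a, b)] += 1
        pairs[(b, a)] += 1

# "Reduce"/scoring phase: recommend unseen items co-occurring with the user's items.
def recommend(user, top_n=2):
    seen = ratings[user]
    scores = defaultdict(int)
    for item in seen:
        for (a, b), c in pairs.items():
            if a == item and b not in seen:
                scores[b] += c
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("u2"))  # -> ['lessonB', 'lessonD']
```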
Cloud computing is becoming an important solution for providing scalable computing resources via the Internet. Because there are tens of thousands of nodes in a data center, the probability of server failures is nontrivial. Therefore, guaranteeing service reliability is a critical challenge. Fault-tolerance strategies, such as checkpointing, are commonly employed. If an edge switch fails, the checkpoint image may become inaccessible, so current checkpoint-based fault-tolerance methods cannot achieve the best effect. In this paper, we propose an edge-switch-failure-aware optimal checkpoint method. The method includes two algorithms. The first algorithm employs the data center topology and communication characteristics to select the checkpoint image storage server. The second algorithm employs the checkpoint image storage characteristics as well as the data center topology to select the recovery server. Simulation experiments are performed to demonstrate the effectiveness of the proposed method.
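The selection rule itself is not given in the abstract; a plausible reading is to prefer storage servers that do not share the source server's edge switch and are topologically close. The sketch below encodes that idea with an invented three-layer topology and an invented distance rule.

```python
# Hypothetical edge-switch-aware checkpoint storage selection (topology invented).
servers = {          # server -> (edge switch, pod)
    "s1": ("e1", "pod1"), "s2": ("e1", "pod1"),
    "s3": ("e2", "pod1"), "s4": ("e3", "pod2"),
}

def hops(a, b):
    """Rough fat-tree distance: same edge switch < same pod < cross pod."""
    ea, pa = servers[a]
    eb, pb = servers[b]
    if ea == eb:
        return 2
    if pa == pb:
        return 4
    return 6

def pick_storage(source):
    # Exclude servers behind the same edge switch: if it fails, both the VM
    # and its checkpoint image would become unreachable together.
    candidates = [s for s in servers
                  if s != source and servers[s][0] != servers[source][0]]
    return min(candidates, key=lambda s: hops(source, s))

print(pick_storage("s1"))  # -> 's3': different edge switch, still in pod1
```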
Cloud computing has taken over the high-performance distributed computing area, and it currently provides on-demand services and resource pooling over the web. As a result of constantly changing user service demand, the task scheduling problem has emerged as a critical analytical topic in cloud computing. The primary goal of scheduling tasks is to distribute tasks to available processors to construct the shortest possible schedule without breaching precedence restrictions. Assignments and schedules of tasks substantially influence system operation in a heterogeneous multiprocessor system. The diverse processes inside a heuristic-based task scheduling method will result in varying makespans in a heterogeneous computing system. As a result, an intelligent scheduling algorithm should efficiently determine the priority of every subtask based on the resources necessary to lower the makespan. This research introduces a novel efficient task scheduling method for cloud computing systems based on the cooperation search algorithm to tackle the essential problem of scheduling tasks on heterogeneous cloud resources. The basic idea of this method is to use the advantages of meta-heuristic algorithms to obtain the optimal solution. We assess the algorithm's performance by running it through three scenarios with varying numbers of tasks. The findings demonstrate that the suggested technique beats the existing methods New Genetic Algorithm (NGA), Genetic Algorithm (GA), Whale Optimization Algorithm (WOA), Gravitational Search Algorithm (GSA), and Hybrid Heuristic and Genetic (HHG) by 7.9%, 2.1%, 8.8%, 7.7%, and 3.4%, respectively, in terms of makespan.
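Whatever search heuristic is used, its fitness function is the makespan of a candidate task-to-processor assignment. The sketch below shows that evaluation, with a naive random-search loop standing in for the cooperation search algorithm, whose actual update rules this abstract does not describe; task lengths and VM speeds are invented.

```python
# Makespan of a task-to-VM assignment, optimized by a placeholder random search
# (the paper's cooperation search algorithm would replace the improvement loop).
import random

task_len = [12, 7, 20, 5, 9, 14]      # invented task lengths (MI)
vm_speed = [4.0, 2.0, 1.0]            # invented VM speeds (MIPS)

def makespan(assign):
    finish = [0.0] * len(vm_speed)
    for t, vm in enumerate(assign):
        finish[vm] += task_len[t] / vm_speed[vm]
    return max(finish)                # schedule length = slowest VM's finish time

best = [random.randrange(len(vm_speed)) for _ in task_len]
for _ in range(5000):
    cand = best[:]
    cand[random.randrange(len(cand))] = random.randrange(len(vm_speed))
    if makespan(cand) < makespan(best):
        best = cand

print(best, round(makespan(best), 2))
```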
Cloud computing systems play a vital role in national security. This paper describes a conceptual framework called dual-system architecture for protecting computing environments. While attempting to be logical and rigorous, heavyweight formal methods are avoided; instead, the paper adopts the process algebra Communicating Sequential Processes (CSP).
Cloud Computing has the ability to provide on-demand access to a shared resource pool. It has completely changed the way businesses are managed, applications are implemented, and services are provided. The rise in popularity has led to a significant increase in user demand for services. However, in cloud environments, efficient load balancing is essential to ensure optimal performance and resource utilization. This systematic review provides a detailed description of load balancing techniques, including static and dynamic load balancing algorithms. Specifically, metaheuristic-based dynamic load balancing algorithms are identified as the optimal solution in the case of increased traffic. In a cloud-based context, this paper describes load balancing measurements, including the benefits and drawbacks associated with the selected load balancing techniques. It also summarizes the algorithms based on implementation, time complexity, adaptability, associated issue(s), and targeted QoS parameters. Additionally, the analysis evaluates the tools and instruments utilized in each investigated study. Moreover, a comparative analysis among static, traditional dynamic, and metaheuristic algorithms based on response time, using the CloudSim simulation tool, is also performed. Finally, the key open problems and potential directions for state-of-the-art metaheuristic-based approaches are addressed.
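To make the static-versus-dynamic distinction concrete, the toy simulation below compares round-robin (static) with least-loaded (dynamic) dispatching on the same task stream. Service times, arrival pattern, and server count are invented; this is not the review's CloudSim setup.

```python
# Toy comparison of a static policy (round-robin) vs a dynamic one (least-loaded).
import random

random.seed(1)
jobs = [random.uniform(0.5, 3.0) for _ in range(200)]  # invented service times

def simulate(policy, n_servers=4):
    busy_until = [0.0] * n_servers          # when each server frees up
    total_response = 0.0
    for t, size in enumerate(jobs):         # one job arrives per time unit
        s = policy(t, busy_until)
        start = max(t, busy_until[s])       # wait if the chosen server is busy
        busy_until[s] = start + size
        total_response += busy_until[s] - t
    return total_response / len(jobs)

round_robin = lambda t, busy: t % len(busy)         # static: ignores load
least_loaded = lambda t, busy: busy.index(min(busy))  # dynamic: tracks load

print("round-robin :", round(simulate(round_robin), 2))
print("least-loaded:", round(simulate(least_loaded), 2))  # typically lower
```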
With the demand for agile development and management, cloud applications today are moving towards a more fine-grained microservice paradigm, where smaller and simpler functioning parts are combined to provide end-to-end services. In recent years, we have witnessed many research efforts that strive to optimize the performance of cloud computing systems in this new era. This paper provides an overview of existing works on recent system performance optimization techniques and classifies them based on their design focuses. We also identify open issues and challenges in this important research direction.
Cloud computing provides a diverse and adaptable resource pool over the internet, allowing users to tap into various resources as needed. It has been seen as a robust solution to relevant challenges. A significant delay can hamper the performance of IoT-enabled cloud platforms. However, efficient task scheduling can lower the cloud infrastructure's energy consumption, thus maximizing the service provider's revenue by decreasing user job processing times. The proposed Modified Chimp-Whale Optimization Algorithm (MCWOA) combines elements of the Chimp Optimization Algorithm (COA) and the Whale Optimization Algorithm (WOA). To enhance MCWOA's identification precision, the Sobol sequence is used in the population initialization phase, ensuring an even distribution of the population across the solution space. Moreover, the local search capabilities are augmented by incorporating the whale optimization algorithm's bubble-net hunting and random search mechanisms into MCWOA's position-updating process. This study demonstrates the effectiveness of the proposed approach using a two-story rigid frame and a simply supported beam model. Simulated outcomes reveal that the new method outperforms the original MCWOA, especially in multi-damage detection scenarios. MCWOA excels in avoiding false positives and enhancing computational speed, making it an optimal choice for structural damage detection. The efficiency of the proposed MCWOA is assessed against metrics such as energy usage, computational expense, task duration, and delay. The simulated data indicate that the new MCWOA outpaces other methods across all metrics. The study also references the Whale Optimization Algorithm (WOA), Chimp Algorithm (CA), Ant Lion Optimizer (ALO), Genetic Algorithm (GA), and Grey Wolf Optimizer (GWO).
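Two of the ingredients named here are standard and can be sketched in isolation: Sobol-sequence population initialization (for even coverage of the search space) and WOA's spiral bubble-net position update. The code below shows both on an invented 2-D search space with a stand-in fitness; it is not the full MCWOA.

```python
# Sobol initialization + WOA-style bubble-net (spiral) update, shown in isolation.
import numpy as np
from scipy.stats import qmc

dim, pop = 2, 8
lo, hi = np.array([-10.0, -10.0]), np.array([10.0, 10.0])

# Sobol sequence spreads the initial population evenly over the search box.
sampler = qmc.Sobol(d=dim, scramble=True, seed=0)
X = qmc.scale(sampler.random(pop), lo, hi)

best = X[np.argmin((X ** 2).sum(axis=1))]   # stand-in fitness: sphere function

def bubble_net(x, best, b=1.0):
    """Spiral move toward the current best: X' = D * e^{b*l} * cos(2*pi*l) + X*."""
    l = np.random.uniform(-1, 1)
    D = np.abs(best - x)
    return D * np.exp(b * l) * np.cos(2 * np.pi * l) + best

X = np.array([bubble_net(x, best) for x in X])
print(X.round(2))
```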
In most existing CP-ABE schemes, there is only one authority in the system and all the public keys and private keys are issued by this authority, which incurs ciphertext sizes and computation costs in the encryption and decryption operations that depend at least linearly on the number of attributes involved in the access policy. We propose an efficient multi-authority CP-ABE scheme in which the authorities need not interact to generate public information during the system initialization phase. Our scheme has constant ciphertext length and a constant number of pairing computations, and it can be proven CPA-secure in the random oracle model under the decisional q-BDHE assumption. When a user's attributes are revoked, the scheme transfers most of the re-encryption work to the cloud service provider, reducing the data owner's computational cost without compromising security. Finally, the analysis and simulation results show that the schemes proposed in this thesis ensure the privacy and secure access of sensitive data stored in the cloud server, and can cope with the dynamic changes of users' access privileges in large-scale systems. Besides, the multi-authority ABE eliminates the key escrow problem, optimizes the ciphertext length, and enhances the efficiency of the encryption and decryption operations.
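The efficiency claim here is counted in pairing evaluations. For readers unfamiliar with that operation, the standard bilinear map definition that CP-ABE constructions build on is reproduced below; this is textbook material, not scheme-specific detail recovered from the paper.

```latex
% Bilinear pairing on groups G, G_T of prime order p with generator g:
% e : G x G -> G_T is efficiently computable, non-degenerate, and bilinear:
\[
  e(g^{a}, g^{b}) = e(g, g)^{ab} \qquad \forall\, a, b \in \mathbb{Z}_{p}.
\]
% Decryption cost is dominated by the number of e(.,.) evaluations, so a
% constant pairing count (instead of one per attribute) is the claimed speed-up.
```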
The quantity and heterogeneity of intelligent energy generation and consumption terminals in the smart grid have increased drastically over the years. These edge devices have created significant pressure on cloud computing (CC) systems and centralised control for data storage and processing in real-time operation and control. The integration of edge computing (EC) can effectively alleviate this pressure and conduct real-time processing while ensuring data security. This paper conducts an extensive review of the EC-CC computing system and its application to the smart grid, which will integrate a vast number of dispersed devices. It first comprehensively describes the relationship among CC, fog computing (FC), and EC to provide a theoretical basis for differentiating them. It then introduces the architecture of the EC-CC computing system in the smart grid, where the architecture consists of both hardware structures and software platforms, and key technologies are introduced to support the functionalities. Thereafter, the application to the smart grid is discussed across the whole supply chain, including energy generation, transportation (transmission and distribution networks), and consumption. Finally, future research opportunities and challenges of EC-CC applied to the smart grid are outlined. This paper can inform future research and industrial exploitation of these new technologies to enable a highly efficient smart grid under the decarbonisation, digitalisation, and decentralisation transitions.
Cloud computing has become increasingly popular due to its capacity to perform computations without relying on physical infrastructure, thereby revolutionizing computer processes. However, the rising energy consumption in cloud centers poses a significant challenge, especially with escalating energy costs. This paper tackles this issue by introducing efficient solutions for data placement and node management, with a clear emphasis on the crucial role of the Internet of Things (IoT) throughout the research process. The IoT assumes a pivotal role in this study by actively collecting real-time data from various sensors strategically positioned in and around data centers. These sensors continuously monitor vital parameters such as energy usage and temperature, thereby providing a comprehensive dataset for analysis. The data generated by the IoT are seamlessly integrated into the Hybrid TCN-GRU-NBeat (NGT) model, enabling a dynamic and accurate representation of the current state of the data center environment. Through the incorporation of the Seagull Optimization Algorithm (SOA), the NGT model optimizes storage migration strategies based on the latest information provided by IoT sensors. The model is trained using 80% of the available dataset and subsequently tested on the remaining 20%. The results demonstrate the effectiveness of the proposed approach, with a Mean Squared Error (MSE) of 5.33% and a Mean Absolute Error (MAE) of 2.83%, accurately estimating power prices and leading to an average reduction of 23.88% in power costs. Furthermore, the integration of IoT data significantly enhances the accuracy of the NGT model, outperforming benchmark algorithms such as DenseNet, Support Vector Machine (SVM), Decision Trees, and AlexNet. The NGT model achieves an impressive accuracy rate of 97.9%, surpassing the rates of 87%, 83%, 80%, and 79%, respectively, for the benchmark algorithms. These findings underscore the effectiveness of the proposed method in optimizing energy efficiency and enhancing the predictive capabilities of cloud computing systems. The IoT plays a critical role in driving these advancements by providing real-time data insights into the operational aspects of data centers.
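The abstract names a TCN-GRU-N-BEATS hybrid but gives no architecture details. As a rough stand-in for one component of such a hybrid, the sketch below trains a minimal PyTorch GRU regressor on an invented sensor window; layer sizes, window length, and the synthetic data are all assumptions.

```python
# Minimal GRU regressor for sensor time series (one component of such a hybrid;
# all sizes and data are invented, not the paper's NGT configuration).
import torch
import torch.nn as nn

class GRUForecaster(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)       # next-step power-price estimate

    def forward(self, x):                      # x: (batch, window, n_features)
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])        # regress from the last hidden state

model = GRUForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 24, 3)                     # 64 windows of 24 sensor readings
y = x[:, -1, :1] * 0.5 + 0.1                   # synthetic target for the demo
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(float(loss))                             # training loss should shrink
```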
As the extensive use of cloud computing raises questions about the security of personal data stored there, cryptography is being used more frequently as a security tool to protect data confidentiality and privacy in the cloud environment. A hypervisor is virtualization software used in cloud hosting to divide and allocate resources across various pieces of hardware. The choice of hypervisor can significantly impact the performance of cryptographic operations in the cloud environment. An important issue that must be carefully examined is that no hypervisor is completely superior in terms of performance; each hypervisor should be examined against specific needs. The main objective of this study is to provide accurate results comparing the performance of Hyper-V and the Kernel-based Virtual Machine (KVM) while implementing different cryptographic algorithms, to guide cloud service providers and end users in choosing the most suitable hypervisor for their cryptographic needs. This study evaluated the efficiency of the two hypervisors, Hyper-V and KVM, in implementing six cryptographic algorithms: Rivest-Shamir-Adleman (RSA), Advanced Encryption Standard (AES), Triple Data Encryption Standard (TripleDES), Carlisle Adams and Stafford Tavares (CAST-128), Blowfish, and Twofish. The study's findings show that KVM outperforms Hyper-V, with 12.2% less Central Processing Unit (CPU) use and 12.95% less time overall for encryption and decryption operations across various file sizes. The findings emphasize how crucial it is to pick a hypervisor appropriate for cryptographic needs in a cloud environment, which could assist both cloud service providers and end users. Future research may focus on how various hypervisors perform while handling other cryptographic workloads.
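A benchmark of this kind reduces to timing bulk encrypt/decrypt calls inside each guest VM and comparing across hypervisors. The sketch below times AES-CTR with Python's `cryptography` package over a few invented buffer sizes; it is one guest-side measurement, not the paper's full Hyper-V/KVM harness, and the fixed key/nonce reuse is acceptable only because this is a benchmark, not real encryption.

```python
# Guest-side micro-benchmark: time AES-256-CTR encryption at several buffer sizes.
# Run the same script in a Hyper-V guest and a KVM guest, then compare outputs.
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(32), os.urandom(16)

def time_aes(n_bytes, repeats=20):
    data = os.urandom(n_bytes)
    start = time.perf_counter()
    for _ in range(repeats):
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        _ = enc.update(data) + enc.finalize()   # benchmark only: nonce reuse is unsafe for real data
    return (time.perf_counter() - start) / repeats

for size in (1 << 16, 1 << 20, 1 << 24):        # 64 KiB, 1 MiB, 16 MiB
    print(f"{size >> 10:>8} KiB: {time_aes(size) * 1e3:.2f} ms")
```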
Serverless computing is a promising paradigm in cloud computing that greatly simplifies cloud programming. With serverless computing, developers only provide function code to the serverless platform, and these functions are invoked by the events that drive them. Nonetheless, security threats in serverless computing, such as vulnerability-based security threats, have become the pain point hindering its wide adoption. Ideas from proactive defense, such as redundancy, diversity, and dynamism, provide promising approaches to protect against cyberattacks. However, these security technologies are mostly applied to serverless platforms in a "stacked" mode, as they were designed independently of serverless computing. The lack of security consideration in the initial design makes it especially challenging to achieve full life-cycle protection for serverless applications at limited cost. In this paper, we present ATSSC, a proactive-defense-enabled, attack-tolerant serverless platform. ATSSC integrates the characteristics of redundancy, diversity, and dynamism seamlessly into the serverless platform to achieve high-level security and efficiency. Specifically, ATSSC constructs multiple diverse function replicas to process the driving events and performs cross-validation to verify the results. To create diverse function replicas, both software diversity and environment diversity are adopted. Furthermore, a dynamic function refresh strategy is proposed to keep serverless functions in a clean state. We implement ATSSC based on Kubernetes and Knative. Analysis and experimental results demonstrate that ATSSC can effectively protect serverless computing against cyberattacks with acceptable costs.
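The cross-validation step can be pictured as fan-out to N diverse replicas followed by a majority vote over their results. The sketch below is a hypothetical gateway-side version of that idea; the replica functions and the quorum threshold are invented for illustration and are not ATSSC's actual mechanism.

```python
# Fan an event out to diverse replicas and accept only the majority result.
# The three replicas stand in for builds diversified in software and environment.
from collections import Counter

def replica_a(event): return event["x"] * 2
def replica_b(event): return event["x"] + event["x"]
def replica_c(event): return event["x"] * 2 + 1   # e.g. a compromised replica

REPLICAS = [replica_a, replica_b, replica_c]

def tolerant_invoke(event, quorum=2):
    results = Counter(f(event) for f in REPLICAS)
    value, votes = results.most_common(1)[0]
    if votes < quorum:
        raise RuntimeError("no quorum: possible attack, refresh the replicas")
    return value                                   # majority outvotes the tampered replica

print(tolerant_invoke({"x": 21}))  # -> 42, despite replica_c misbehaving
```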
Redundancy elimination techniques are extensively investigated to reduce storage overheads for cloud-assisted health systems. Deduplication eliminates the redundancy of duplicate blocks by storing one physical instance referenced by multiple duplicates. Delta compression is usually regarded as a complementary technique to deduplication that further removes the redundancy of similar blocks, but our observations indicate that this does not hold when data have sparse duplicate blocks. In addition, there are many overlapped deltas in the resemblance detection process of post-deduplication delta compression, which hinders the efficiency of delta compression, and the index phase of resemblance detection queries abundant non-similar blocks, resulting in inefficient system throughput. Therefore, a multi-feature-based redundancy elimination scheme, called MFRE, is proposed to solve these problems. The similarity feature and the temporal locality feature are exploited to assist redundancy elimination, where the similarity feature well expresses the duplicate attribute. Then, similarity-based dynamic post-deduplication delta compression and temporal-locality-based dynamic delta compression discover more similar base blocks to minimise overlapped deltas and improve compression ratios. Moreover, a clustering method based on block relationships and a feature index strategy based on Bloom filters reduce IO overheads and improve system throughput. Experiments demonstrate that the proposed method, compared to the state-of-the-art method, improves the compression ratio and system throughput by 9.68% and 50%, respectively.
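The dedup-then-delta pipeline can be sketched compactly: exact duplicates are caught by a fingerprint index, and a remaining block is delta-compressed against a similar base. Below, a hypothetical pipeline uses SHA-1 fingerprints for dedup and `zlib` with a preset dictionary as a crude stand-in for delta encoding; MFRE's actual features, clustering, and Bloom-filter index are not modeled.

```python
# Dedup by fingerprint, then "delta" a new block against a similar stored base
# using zlib's preset-dictionary mode as a crude delta encoder.
import hashlib
import zlib

store = {}                       # fingerprint -> raw block (the dedup index)

def put(block, base=None):
    fp = hashlib.sha1(block).digest()
    if fp in store:              # duplicate block: nothing new is written
        return 0
    store[fp] = block
    if base is not None:         # similar base found by resemblance detection
        c = zlib.compressobj(zdict=base)
        delta = c.compress(block) + c.flush()
        return len(delta)        # bytes actually written for this block
    return len(zlib.compress(block))

base = b"patient:1234 hr:72 bp:120/80 spo2:98 " * 100
near = base.replace(b"hr:72", b"hr:75")
print(put(base))                 # full (self-)compressed size
print(put(near, base=base))      # much smaller: encoded against the base
print(put(near, base=base))      # 0: exact duplicate, deduplicated
```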
Fog computing has recently developed as a new paradigm that aims to serve time-sensitive applications better than cloud computing by placing and processing tasks in close proximity to the data sources. However, the majority of fog nodes in this environment are geographically scattered, with resources that are limited in capability compared to cloud nodes, thus making the application placement problem more complex than in cloud computing. A cost-efficient application placement approach for fog-cloud computing environments combines the benefits of both fog and cloud computing to optimize the placement of applications and services while minimizing costs. Such an approach is particularly relevant in scenarios where latency, resource constraints, and cost considerations are crucial factors for the deployment of applications. In this study, we propose a hybrid approach that combines a genetic algorithm (GA) with the Flamingo Search Algorithm (FSA) to place application modules while minimizing cost. We consider four cost types for application deployment: computation, communication, energy consumption, and violations. The proposed hybrid approach, called GA-FSA, is designed to place the application modules with the application's deadline in mind and deploy them appropriately to fog or cloud nodes to curtail the overall cost of the system. An extensive simulation is conducted to assess the performance of the proposed approach against other state-of-the-art approaches. The results demonstrate that the GA-FSA approach is superior to the other approaches with respect to task guarantee ratio (TGR) and total cost.
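The four named cost terms suggest a weighted objective evaluated over a placement vector. The sketch below computes such an objective for a candidate placement; the node parameters, cost weights, and deadline-violation penalty are all invented, and the GA/FSA search itself is omitted.

```python
# Evaluate the four-part deployment cost of one module placement vector.
# Any metaheuristic (GA, FSA, or their hybrid) would minimize this objective.
modules = [("sense", 2.0), ("filter", 5.0), ("infer", 9.0)]   # (name, workload in MI)
nodes = {  # invented node parameters
    "fog":   {"speed": 2.0,  "cost_mi": 0.02, "rtt": 0.005, "watt": 4.0},
    "cloud": {"speed": 10.0, "cost_mi": 0.01, "rtt": 0.080, "watt": 9.0},
}
DEADLINE, PENALTY = 3.0, 10.0      # seconds; cost units per deadline violation

def total_cost(placement):         # placement: one node name per module
    comp = comm = energy = violations = 0.0
    finish = 0.0
    for (name, mi), node in zip(modules, placement):
        n = nodes[node]
        exec_t = mi / n["speed"]
        finish += exec_t + n["rtt"]
        comp += mi * n["cost_mi"]
        comm += n["rtt"]           # crude proxy: link latency as communication cost
        energy += exec_t * n["watt"] * 0.001
    if finish > DEADLINE:          # the "violations" cost term
        violations = PENALTY
    return comp + comm + energy + violations

print(total_cost(["fog", "fog", "cloud"]))
print(total_cost(["cloud", "cloud", "cloud"]))
```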
In the cloud environment, ensuring a high level of data security is in high demand. Data planning storage optimization is part of the whole security process in the cloud environment. It enables data security by avoiding the risk of data loss and data overlapping. The development of data flow scheduling approaches for the cloud environment that take security parameters into account is insufficient. In our work, we propose a data scheduling model for the cloud environment. The model is made up of three parts that together help dispatch user data flows to the appropriate cloud VMs. The first component is the collector agent, which must periodically collect information on the state of the network links. The second is the monitoring agent, which must then analyze and classify the state of each link, make a decision on it, and finally transmit this information to the scheduler. The third is the scheduler, which must consider the previous information to transfer user data, including fair distribution and reliable paths. It should be noted that each part of the proposed model requires the development of its own algorithms. In this article, we are interested in the development of data transfer algorithms, including fair distribution with consideration of a stable link state. These algorithms are based on the grouping of transmitted files and the iterative method. The proposed algorithms show good performance in obtaining an approximate solution to the studied problem, which is NP-hard. The experimental results show that the best algorithm is the half-grouped minimum excluding (HME) algorithm, with a percentage of 91.3%, an average deviation of 0.042, and an execution time of 0.001 s.
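The HME algorithm itself is not specified in this abstract, but "grouping of transmitted files" with fair distribution over reliable links is in the spirit of longest-processing-time list scheduling. The sketch below shows that simpler, well-known baseline (file sizes and post-monitoring link capacities invented); it should not be read as the authors' HME.

```python
# LPT-style fair distribution of files over links: a baseline in the spirit of
# "group the files, then assign each to the least-loaded reliable link".
import heapq

files = [700, 120, 540, 90, 300, 260, 410]           # invented sizes (MB)
links = {"linkA": 10.0, "linkB": 8.0, "linkC": 3.0}  # invented MB/s after monitoring

def distribute(files, links):
    # Min-heap of (projected completion time, link); biggest files placed first
    # so no single link is left holding a disproportionate share.
    heap = [(0.0, name) for name in links]
    heapq.heapify(heap)
    plan = {name: [] for name in links}
    for size in sorted(files, reverse=True):
        t, name = heapq.heappop(heap)
        plan[name].append(size)
        heapq.heappush(heap, (t + size / links[name], name))
    return plan

print(distribute(files, links))
```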
Task scheduling plays a key role in effectively managing and allocating computing resources to meet various computing tasks in a cloud computing environment. Short execution time and low load imbalance may be challenges for some algorithms in resource scheduling scenarios. In this work, the Hierarchical Particle Swarm Optimization-Evolutionary Artificial Bee Colony Algorithm (HPSO-EABC) is proposed, which hybridizes our Evolutionary Artificial Bee Colony (EABC) and Hierarchical Particle Swarm Optimization (HPSO) algorithms. The HPSO-EABC algorithm incorporates the advantages of both HPSO and EABC. Comprehensive testing, including evaluations of algorithm convergence speed, resource execution time, load balancing, and operational costs, has been done. The results indicate that the EABC algorithm exhibits greater parallelism compared to the Artificial Bee Colony algorithm. Compared with the Particle Swarm Optimization algorithm, the HPSO algorithm not only improves the global search capability but also effectively mitigates getting stuck in local optima. As a result, the hybrid HPSO-EABC algorithm demonstrates significant improvements in stability and convergence speed. Moreover, it exhibits enhanced resource scheduling performance in both homogeneous and heterogeneous environments, effectively reducing execution time and cost, which is also verified by the ablation experiments.
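The general PSO-plus-ABC hybridization pattern can be sketched generically: a PSO velocity/position update for global movement, followed by an ABC-style neighbor perturbation with greedy acceptance for local refinement. The code below shows that pattern on a toy continuous fitness; it does not reproduce the paper's hierarchical topology or evolutionary operators.

```python
# Generic PSO + ABC-style hybrid step on a toy fitness (sphere function).
import numpy as np

rng = np.random.default_rng(0)
fitness = lambda x: float((x ** 2).sum())

pop, dim, w, c1, c2 = 20, 5, 0.7, 1.5, 1.5
X = rng.uniform(-5, 5, (pop, dim))
V = np.zeros((pop, dim))
P = X.copy()                                   # personal bests
g = P[np.argmin([fitness(p) for p in P])]      # global best

for _ in range(100):
    # PSO phase: move particles toward personal and global bests.
    r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
    V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
    X = X + V
    # ABC-style phase: greedy local search around each candidate solution.
    for i in range(pop):
        k = rng.integers(pop)                  # random "neighbor" food source
        trial = X[i] + rng.uniform(-1, 1, dim) * (X[i] - X[k])
        if fitness(trial) < fitness(X[i]):
            X[i] = trial
        if fitness(X[i]) < fitness(P[i]):
            P[i] = X[i].copy()
    g = P[np.argmin([fitness(p) for p in P])]

print(round(fitness(g), 6))                    # should approach 0
```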
Cloud computing technology is utilized to achieve resource utilization of remote virtual computers and to facilitate consumers with rapid and accurate massive data services. It uses on-demand resource provisioning, but the necessitated constraints of rapid turnaround time, minimal execution cost, a high rate of resource utilization, and limited makespan transform the Load Balancing (LB) process-based Task Scheduling (TS) problem into an NP-hard optimization issue. In this paper, the Hybrid Prairie Dog and Beluga Whale Optimization Algorithm (HPDBWOA) is propounded for precisely mapping tasks to virtual machines, with the objective of addressing the dynamic nature of the cloud environment. This capability of HPDBWOA helps in decreasing SLA violations and makespan with optimal resource management. It is modelled as a scheduling strategy that utilizes the merits of PDOA and BWOA to attain reactive decision making with respect to assigning tasks to virtual resources while taking their priorities into account. It addresses the problem of premature convergence with well-balanced exploration and exploitation to attain the necessitated Quality of Service (QoS) and minimize the waiting time incurred during the TS process. It further balances exploration and exploitation rates to reduce the makespan during task allocation with complete awareness of VM state. The results of the proposed HPDBWOA confirmed 32.18% lower energy utilization and 28.94% lower cost than the approaches used for investigation. The statistical investigation of the proposed HPDBWOA, conducted using ANOVA, confirmed its efficacy over the benchmarked systems in terms of throughput, system time, and response time.
As cloud computing usage grows, cloud data centers play an increasingly important role. To maximize resource utilization, ensure service quality, and enhance system performance, it is crucial to allocate tasks and manage performance effectively. The purpose of this study is to provide an extensive analysis of the task allocation and performance management techniques employed in cloud data centers. The aim is to systematically categorize and organize previous research by identifying the cloud computing methodologies, categories, and gaps. A literature review was conducted, which included the analysis of 463 task allocation and 480 performance management papers. The review revealed three task allocation research topics and seven performance management methods. The task allocation research areas are resource allocation, load balancing, and scheduling. Performance management includes monitoring and control, power and energy management, resource utilization optimization, quality of service management, fault management, virtual machine management, and network management. The study proposes new techniques to enhance cloud computing work allocation and performance management. Shortcomings in each approach can guide future research. The research's findings on cloud data center task allocation and performance management can assist academics, practitioners, and cloud service providers in optimizing their systems for dependability, cost-effectiveness, and scalability. Innovative methodologies can steer future research to fill gaps in the literature.