Cloud computing provides a diverse and adaptable resource pool over the internet, allowing users to tap into various resources as needed. It has been seen as a robust solution to the challenges of large-scale, on-demand computing, yet significant delay can hamper the performance of IoT-enabled cloud platforms. Efficient task scheduling, however, can lower the cloud infrastructure's energy consumption and maximize the service provider's revenue by decreasing user job processing times. The proposed Modified Chimp-Whale Optimization Algorithm (MCWOA) combines elements of the Chimp Optimization Algorithm (COA) and the Whale Optimization Algorithm (WOA). To enhance MCWOA's identification precision, the Sobol sequence is used in the population initialization phase, ensuring an even distribution of the population across the solution space. Moreover, the COA's local search capabilities are augmented by incorporating WOA's bubble-net hunting and random search mechanisms into MCWOA's position-updating process. This study demonstrates the effectiveness of the proposed approach using a two-story rigid frame and a simply supported beam model. Simulated outcomes reveal that the new method outperforms the original COA, especially in multi-damage detection scenarios; MCWOA excels at avoiding false positives and improving computational speed, making it an optimal choice for structural damage detection. The efficiency of the proposed MCWOA is also assessed against metrics such as energy usage, computational expense, task duration, and delay, where the simulated data indicate that MCWOA outpaces competing methods across all metrics, including the Whale Optimization Algorithm (WOA), Chimp Algorithm (CA), Ant Lion Optimizer (ALO), Genetic Algorithm (GA), and Grey Wolf Optimizer (GWO).
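The Sobol-based initialization step described above can be illustrated with a short sketch. This is a hypothetical, minimal rendering using SciPy's quasi-Monte Carlo module, not the authors' implementation; the population size, dimension, and bounds are assumptions for illustration.

```python
import numpy as np
from scipy.stats import qmc

def sobol_population(pop_size, dim, lower, upper, seed=42):
    """Draw a quasi-random initial population from a Sobol sequence.

    Sobol points cover the unit hypercube more evenly than pseudo-random
    draws, which is the property the MCWOA initialization relies on.
    """
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    unit = sampler.random(pop_size)        # points in [0, 1)^dim
    return qmc.scale(unit, lower, upper)   # map to the search bounds

# Illustrative values only: 32 search agents in a 10-dimensional space.
pop = sobol_population(pop_size=32, dim=10, lower=[-10.0] * 10, upper=[10.0] * 10)
print(pop.shape)  # (32, 10)
```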
As the extensive use of cloud computing raises questions about the security of any personal data stored there, cryptography is being used more frequently as a security tool to protect data confidentiality and privacy in the cloud environment. A hypervisor is virtualization software used in cloud hosting to divide and allocate resources across various pieces of hardware, and the choice of hypervisor can significantly impact the performance of cryptographic operations in the cloud. An important issue that must be carefully examined is that no hypervisor is completely superior in terms of performance; each hypervisor must be evaluated against specific needs. The main objective of this study is to provide accurate results comparing the performance of Hyper-V and the Kernel-based Virtual Machine (KVM) when implementing different cryptographic algorithms, to guide cloud service providers and end users in choosing the most suitable hypervisor for their cryptographic needs. The study evaluated the efficiency of the two hypervisors in implementing six cryptographic algorithms: Rivest-Shamir-Adleman (RSA), the Advanced Encryption Standard (AES), the Triple Data Encryption Standard (TripleDES), Carlisle Adams and Stafford Tavares (CAST-128), Blowfish, and Twofish. The findings show that KVM outperforms Hyper-V, with 12.2% less Central Processing Unit (CPU) use and 12.95% less time overall for encryption and decryption operations across various file sizes. These findings emphasize how crucial it is to pick a hypervisor appropriate for cryptographic needs in a cloud environment, which could assist both cloud service providers and end users. Future research may focus more on how various hypervisors perform while handling cryptographic workloads.
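To make this kind of measurement concrete, here is a minimal timing harness for AES encryption and decryption using the Python cryptography package. It is an illustrative sketch, not the study's benchmark; the key size, CTR mode, and buffer size are assumptions.

```python
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def time_aes(data: bytes) -> tuple[float, float]:
    """Return (encrypt_seconds, decrypt_seconds) for AES-256 in CTR mode."""
    key, nonce = os.urandom(32), os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    t0 = time.perf_counter()
    ct = enc.update(data) + enc.finalize()
    t1 = time.perf_counter()
    dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    pt = dec.update(ct) + dec.finalize()
    t2 = time.perf_counter()
    assert pt == data  # sanity check: decryption round-trips
    return t1 - t0, t2 - t1

# Illustrative 64 MiB buffer; a comparable study would repeat this per file size
# and per algorithm on each hypervisor while sampling CPU usage.
enc_s, dec_s = time_aes(os.urandom(64 * 1024 * 1024))
print(f"encrypt: {enc_s:.3f}s  decrypt: {dec_s:.3f}s")
```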
Redundancy elimination techniques are extensively investigated to reduce storage overheads for cloud-assisted health systems. Deduplication eliminates the redundancy of duplicate blocks by storing one physical instance referenced by multiple duplicates. Delta compression is usually regarded as a complementary technique that further removes the redundancy of similar blocks, but our observations indicate that this assumption breaks down when data contain few duplicate blocks. In addition, many overlapped deltas arise in the resemblance detection stage of post-deduplication delta compression, which hinders the efficiency of delta compression, and the index phase of resemblance detection queries abundant non-similar blocks, resulting in inefficient system throughput. Therefore, a multi-feature-based redundancy elimination scheme, called MFRE, is proposed to solve these problems. Similarity and temporal locality features are extracted to assist redundancy elimination, where the similarity feature well expresses the duplicate attribute. Then, similarity-based dynamic post-deduplication delta compression and temporal-locality-based dynamic delta compression discover more similar base blocks to minimize overlapped deltas and improve compression ratios. Moreover, a clustering method based on block relationships and a feature index strategy based on Bloom filters reduce I/O overheads and improve system throughput. Experiments demonstrate that the proposed method, compared to the state-of-the-art, improves the compression ratio and system throughput by 9.68% and 50%, respectively.
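The deduplication half of this pipeline can be shown in a few lines: blocks are fingerprinted, and a block whose hash has been seen before is stored as a reference rather than a second physical copy. The sketch below is a simplified stand-in, not MFRE itself; fixed 4 KiB blocks, SHA-256 fingerprints, and an in-memory index are assumptions, and the similarity features and delta compression are omitted.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed-size chunking; real systems may chunk differently

def deduplicate(data: bytes):
    """Split data into blocks; store each unique block once, duplicates as refs."""
    store = {}    # fingerprint -> physical block (one instance per content)
    recipe = []   # ordered fingerprints needed to reconstruct the stream
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)   # first occurrence becomes the physical copy
        recipe.append(fp)
    return store, recipe

def reconstruct(store, recipe) -> bytes:
    return b"".join(store[fp] for fp in recipe)

data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # stream with duplicate blocks
store, recipe = deduplicate(data)
print(len(recipe), "blocks,", len(store), "stored physically")  # 4 blocks, 2 stored
assert reconstruct(store, recipe) == data
```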
In the landscape of Cloud Computing (CC), the Cloud Datacenter (DC) is a conglomerate of physical servers whose performance can be hindered by bottlenecks as CC services proliferate. Central to CC's performance, the Cloud Service Broker (CSB) orchestrates DC selection; failure to route user requests to suitable DCs turns the CSB itself into a bottleneck and endangers service quality. Deploying an efficient CSB policy is therefore imperative, optimizing DC selection to meet stringent Quality-of-Service (QoS) demands. Among the numerous CSB policies, implementation grapples with challenges such as cost and availability. This article undertakes a holistic review of diverse CSB policies while surveying the difficulties confronting current ones. The foremost objective is to pinpoint research gaps and remedies to guide future policy development. Additionally, it clarifies the various DC selection methodologies employed in CC, benefiting practitioners and researchers alike. Employing synthetic analysis, the article systematically assesses and compares a wide range of DC selection techniques, equipping decision-makers with a pragmatic framework for discerning the technique apt for their needs. In summary, the article underscores the paramount importance of adept CSB policies in DC selection and the imperative role of efficient CSB policies in optimizing CC performance. By emphasizing the significance of these policies and their modeling implications, it contributes to both the general modeling discourse and its practical applications in the CC domain.
Fog computing has recently developed as a new paradigm that aims to serve time-sensitive applications better than cloud computing by placing and processing tasks in close proximity to the data sources. However, the majority of fog nodes in this environment are geographically scattered, with resources that are limited compared to cloud nodes, making the application placement problem more complex than in cloud computing. This calls for a cost-efficient application placement approach for fog-cloud environments that combines the benefits of both paradigms to optimize the placement of applications and services while minimizing costs, which is particularly relevant in scenarios where latency, resource constraints, and cost are crucial deployment factors. In this study, we propose a hybrid approach that combines a genetic algorithm (GA) with the Flamingo Search Algorithm (FSA) to place application modules while minimizing cost. We consider four cost types for application deployment: computation, communication, energy consumption, and violations. The proposed hybrid approach, called GA-FSA, places application modules with respect to application deadlines and deploys them to fog or cloud nodes so as to curtail the overall cost of the system. An extensive simulation is conducted to assess the performance of the proposed approach against other state-of-the-art approaches. The results demonstrate that GA-FSA is superior to the other approaches with respect to task guarantee ratio (TGR) and total cost.
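The four cost types can be read as one additive objective that the hybrid search minimizes. The sketch below is a hypothetical formulation under an assumed per-violation penalty weight; the paper's exact cost model may differ.

```python
def placement_cost(comp, comm, energy, violations, penalty=10.0):
    """Total deployment cost for one candidate module placement.

    comp       -- computation cost of modules on their assigned nodes
    comm       -- communication cost between dependent modules
    energy     -- energy-consumption cost of the chosen fog/cloud nodes
    violations -- number of application deadlines the placement misses
    penalty    -- assumed per-violation penalty weight (illustrative)
    """
    return comp + comm + energy + penalty * violations

# A placement that misses one deadline pays a steep price relative to one
# that is slightly costlier to run but meets every deadline.
print(placement_cost(comp=5.0, comm=2.0, energy=1.5, violations=1))  # 18.5
print(placement_cost(comp=6.0, comm=2.5, energy=1.8, violations=0))  # 10.3
```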
In the cloud environment, ensuring a high level of data security is in high demand. Data planning storage optimization is part of the whole security process in the cloud environment, enabling data security by avoiding the risk of data loss and data overlapping. However, the development of data flow scheduling approaches that take security parameters into account remains insufficient. In our work, we propose a data scheduling model for the cloud environment. The model is made up of three parts that together dispatch user data flows to the appropriate cloud VMs. The first component is the collector agent, which periodically collects information on the state of the network links. The second is the monitoring agent, which analyzes and classifies the state of each link, makes a decision, and transmits this information to the scheduler. The third is the scheduler, which considers this information to transfer user data, ensuring fair distribution and reliable paths. Each part of the proposed model requires the development of its own algorithms. In this article, we focus on the data transfer algorithms, including fair distribution under stable link states. These algorithms are based on grouping the transmitted files and on an iterative method, and they yield an approximate solution to the studied problem, which is NP-hard. The experimental results show that the best algorithm is the half-grouped minimum excluding (HME), with a percentage of 91.3%, an average deviation of 0.042, and an execution time of 0.001 s.
Cloud computing is a dynamic and rapidly evolving field where the demand for resources fluctuates continuously. This paper addresses the imperative need for adaptability in allocating resources to applications and services within cloud computing environments, motivated by the pressing issue of accommodating fluctuating levels of user demand efficiently. The proposed resource allocation method aims to achieve a substantial reduction in energy consumption, which hinges on the precise and efficient allocation of resources to the tasks that require them most, aligning with the broader goal of sustainable and eco-friendly cloud computing systems. To enhance the resource allocation process, we introduce a novel knowledge-based optimization algorithm. We rigorously evaluate its efficacy against existing algorithms, including the Flower Pollination Algorithm (FPA), Spark Lion Whale Optimization (SLWO), and the Firefly Algorithm. Our findings reveal that the proposed algorithm, the Knowledge-Based Flower Pollination Algorithm (KB-FPA), consistently outperforms these conventional methods in both resource allocation efficiency and energy consumption reduction. This paper underscores the significance of resource allocation in cloud computing: by addressing the critical issues of adaptability and energy efficiency, it lays the groundwork for more sustainable cloud computing systems. Our contribution lies in a new resource allocation strategy offering significantly improved efficiency and sustainability within cloud computing infrastructures.
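For readers unfamiliar with the base algorithm that KB-FPA extends, here is a minimal sketch of one standard Flower Pollination Algorithm generation: global pollination takes a heavy-tailed step toward the best solution, local pollination mixes two random solutions. The knowledge-based enhancement itself is not reproduced, and the switch probability and the Cauchy stand-in for the Lévy step are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fpa_step(pop, best, p_switch=0.8):
    """One standard FPA generation over a population of candidate allocations."""
    new_pop = pop.copy()
    n, dim = pop.shape
    for i in range(n):
        if rng.random() < p_switch:
            # Global pollination: heavy-tailed step toward the best flower
            # (standard Cauchy used as an assumed approximation of a Lévy flight).
            step = rng.standard_cauchy(dim) * 0.01
            new_pop[i] = pop[i] + step * (best - pop[i])
        else:
            # Local pollination: mix with two randomly chosen flowers.
            j, k = rng.choice(n, size=2, replace=False)
            new_pop[i] = pop[i] + rng.random() * (pop[j] - pop[k])
    return new_pop

pop = rng.uniform(-5, 5, size=(20, 4))          # 20 candidates, 4 dimensions
best = pop[np.argmin(np.sum(pop**2, axis=1))]   # sphere function as stand-in objective
pop = fpa_step(pop, best)
```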
Cloud computing has become increasingly popular due to its capacity to perform computations without relying on physical infrastructure, thereby revolutionizing computer processes. However, the rising energy consumption in cloud centers poses a significant challenge, especially with escalating energy costs. This paper tackles this issue by introducing efficient solutions for data placement and node management, with a clear emphasis on the crucial role of the Internet of Things (IoT) throughout the research process. The IoT assumes a pivotal role in this study by actively collecting real-time data from various sensors strategically positioned in and around data centers. These sensors continuously monitor vital parameters such as energy usage and temperature, providing a comprehensive dataset for analysis. The data generated by the IoT is seamlessly integrated into the Hybrid TCN-GRU-NBeat (NGT) model, enabling a dynamic and accurate representation of the current state of the data center environment. Through the incorporation of the Seagull Optimization Algorithm (SOA), the NGT model optimizes storage migration strategies based on the latest information provided by IoT sensors. The model is trained using 80% of the available dataset and subsequently tested on the remaining 20%. The results demonstrate the effectiveness of the proposed approach, with a Mean Squared Error (MSE) of 5.33% and a Mean Absolute Error (MAE) of 2.83%, accurately estimating power prices and leading to an average reduction of 23.88% in power costs. Furthermore, the integration of IoT data significantly enhances the accuracy of the NGT model, outperforming benchmark algorithms such as DenseNet, Support Vector Machine (SVM), Decision Trees, and AlexNet. The NGT model achieves an accuracy rate of 97.9%, surpassing the benchmarks' rates of 87%, 83%, 80%, and 79%, respectively. These findings underscore the effectiveness of the proposed method in optimizing energy efficiency and enhancing the predictive capabilities of cloud computing systems, with the IoT driving these advancements by providing real-time insights into the operational aspects of data centers.
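The evaluation protocol described here (an 80/20 train/test split scored by MSE and MAE) can be illustrated with scikit-learn. The sketch below uses a placeholder regressor rather than the NGT model, and the synthetic sensor data and price target are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 6))   # stand-in sensor features (energy, temperature, ...)
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=1000)  # stand-in power price

# 80% of the dataset for training, the remaining 20% for testing, as in the paper.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)

model = GradientBoostingRegressor().fit(X_train, y_train)  # placeholder for NGT
pred = model.predict(X_test)
print(f"MSE: {mean_squared_error(y_test, pred):.4f}")
print(f"MAE: {mean_absolute_error(y_test, pred):.4f}")
```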
Cloud computing technology is utilized to provision remote virtual computing resources that furnish consumers with rapid and accurate massive data services. It relies on on-demand resource provisioning, but the constraints of rapid turnaround time, minimal execution cost, a high rate of resource utilization, and limited makespan transform the Load Balancing (LB)-based Task Scheduling (TS) problem into an NP-hard optimization issue. In this paper, the Hybrid Prairie Dog and Beluga Whale Optimization Algorithm (HPDBWOA) is proposed for precise mapping of tasks to virtual machines, with the objective of addressing the dynamic nature of the cloud environment. This capability of HPDBWOA helps decrease SLA violations and makespan with optimal resource management. It is modelled as a scheduling strategy that utilizes the merits of PDOA and BWOA to attain reactive decision-making when assigning tasks to virtual resources, taking their priorities into account. It addresses the problem of premature convergence with well-balanced exploration and exploitation to attain the necessary Quality of Service (QoS), minimizing the waiting time incurred during the TS process. It further balances exploration and exploitation rates to reduce the makespan during task allocation with complete awareness of VM state. The results confirmed that the proposed HPDBWOA achieves 32.18% lower energy utilization and 28.94% lower cost than the approaches used for comparison. A statistical investigation conducted using ANOVA confirmed its efficacy over the benchmarked systems in terms of throughput, system time, and response time.
Task scheduling plays a key role in effectively managing and allocating computing resources to meet various computing tasks in a cloud computing environment. Short execution time and low load imbalance can be challenging for some algorithms in resource scheduling scenarios. In this work, the Hierarchical Particle Swarm Optimization-Evolutionary Artificial Bee Colony Algorithm (HPSO-EABC) is proposed, which hybridizes our Evolutionary Artificial Bee Colony (EABC) and Hierarchical Particle Swarm Optimization (HPSO) algorithms. The HPSO-EABC algorithm incorporates the advantages of both HPSO and EABC. Comprehensive testing, including evaluations of algorithm convergence speed, resource execution time, load balancing, and operational costs, has been done. The results indicate that the EABC algorithm exhibits greater parallelism compared to the Artificial Bee Colony algorithm. Compared with the Particle Swarm Optimization algorithm, the HPSO algorithm not only improves the global search capability but also effectively mitigates getting stuck in local optima. As a result, the hybrid HPSO-EABC algorithm demonstrates significant improvements in stability and convergence speed. Moreover, it exhibits enhanced resource scheduling performance in both homogeneous and heterogeneous environments, effectively reducing execution time and cost, as verified by the ablation experiments.
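The particle swarm half of the hybrid can be summarized by the canonical velocity and position update, sketched below. The inertia and acceleration coefficients are typical textbook values, and the hierarchical grouping and bee-colony operators of HPSO-EABC are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO update: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

# 10 particles in 3 dimensions; pbest/gbest would be updated from the objective
# (e.g., makespan or cost of a candidate schedule) across iterations.
x = rng.uniform(-1, 1, size=(10, 3))
v = np.zeros_like(x)
pbest = x.copy()
gbest = x[0]
x, v = pso_step(x, v, pbest, gbest)
```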
As cloud computing usage grows, cloud data centers play an increasingly important role. To maximize resource utilization, ensure service quality, and enhance system performance, it is crucial to allocate tasks and manage performance effectively. The purpose of this study is to provide an extensive analysis of the task allocation and performance management techniques employed in cloud data centers, systematically categorizing and organizing previous research by identifying cloud computing methodologies, categories, and gaps. A literature review was conducted that analyzed 463 task allocation and 480 performance management papers. The review revealed three task allocation research topics and seven performance management methods. The task allocation research areas are resource allocation, load balancing, and scheduling. Performance management includes monitoring and control, power and energy management, resource utilization optimization, quality of service management, fault management, virtual machine management, and network management. The study proposes new techniques to enhance cloud computing work allocation and performance management, and the shortcomings of each approach can guide future research. The findings on cloud data center task allocation and performance management can assist academics, practitioners, and cloud service providers in optimizing their systems for dependability, cost-effectiveness, and scalability, while the innovative methodologies identified can steer future research toward filling gaps in the literature.
Cloud computing has drastically changed the delivery and consumption of live streaming content. The designs, challenges, and possible uses of cloud computing for live streaming are studied. A comprehensive overview of the technical and business issues surrounding cloud-based live streaming is provided, including the benefits of cloud computing, the various live streaming architectures, and the challenges that live streaming service providers face in delivering high-quality, real-time services. The different techniques used to improve the performance of video streaming, such as adaptive bit-rate streaming, multicast distribution, and edge computing, are discussed, and the necessity of low-latency, high-quality video transmission in cloud-based live streaming is underlined. The use of cutting-edge technology, such as artificial intelligence and machine learning, to improve user experience and live streaming service performance is also examined. In addition, the legal and regulatory implications of cloud-based live streaming are addressed, including issues of network neutrality, data privacy, and content moderation. A subsequent section examines the future of cloud computing for live streaming and the most likely new developments in trends and technology. For technology vendors, live streaming service providers, and regulators, the findings have major policy-relevant implications. Suggestions on how stakeholders should address these concerns and take advantage of the potential presented by this rapidly evolving sector are provided, as well as insights into the key challenges and opportunities associated with cloud-based live streaming.
A new era of data access and management has begun with the use of cloud computing in the healthcare industry. Despite the efficiency and scalability that the cloud provides, the security of private patient data is still a major concern. Encryption, network security, and adherence to data protection laws are key to ensuring the confidentiality and integrity of healthcare data in the cloud. The computational overhead of encryption technologies could lead to delays in data access and processing rates. To address these challenges, we introduced the Enhanced Parallel Multi-Key Encryption Algorithm (EPM-KEA), aiming to bolster healthcare data security and facilitate the secure storage of critical patient records in the cloud. The data was gathered from two categories: Authorization for Hospital Admission (AIH) and Authorization for High Complexity Operations. We use Z-score normalization for preprocessing. The primary goal of implementing encryption techniques is to secure and store massive amounts of data on the cloud. It is feasible that cloud storage alternatives for protecting healthcare data will become more widely available if security issues can be successfully fixed. As a result of our analysis using specific parameters including execution time (42%), encryption time (45%), decryption time (40%), security level (97%), and energy consumption (53%), the system demonstrated favorable performance when compared to the traditional method. This suggests that by addressing these security concerns, there is the potential for broader accessibility to cloud storage solutions for safeguarding healthcare data.
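The Z-score preprocessing mentioned above is a one-line transform: each feature is centered on its mean and scaled by its standard deviation. A minimal sketch with synthetic values standing in for the health records:

```python
import numpy as np

def z_score(x: np.ndarray) -> np.ndarray:
    """Standardize each column to zero mean and unit variance."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

# Illustrative records only (two numeric fields per record).
records = np.array([[120.0, 3.2], [135.0, 4.1], [110.0, 2.9]])
print(z_score(records))  # each column now has mean 0 and std 1
```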
Cloud computing is the new norm within business entities as businesses try to keep up with technological advancements and user needs. The concept is defined as a computing environment allowing for remote outsourcing of storage and computing resources. A hybrid cloud environment is an excellent example: it provides organizations with increased scalability, control over their data, and support for a remote workforce. However, hybrid cloud systems are expensive, as organizations operate different infrastructures, and they introduce complexity to the organization's activities. Data security is among the most vital concerns resulting from the use of cloud computing, affecting the rate of user adoption and acceptance. This article, borrowing from the hybrid cloud computing system, recommends combining traditional and modern data security systems. Traditional data security systems have proven effective in their respective roles, with the main challenge arising from their limited recognition of context and connectivity. Therefore, integrating traditional and modern designs is recommended to enhance effectiveness, context, connectivity, and efficiency.
Cloud Computing has the ability to provide on-demand access to a shared resource pool and has completely changed the way businesses are managed, applications are implemented, and services are provided. Its rise in popularity has led to a significant increase in user demand for services. In cloud environments, however, efficient load balancing is essential to ensure optimal performance and resource utilization. This systematic review provides a detailed description of load balancing techniques, including static and dynamic load balancing algorithms; metaheuristic-based dynamic load balancing algorithms are identified as the optimal solution under increased traffic. In a cloud-based context, this paper describes load balancing measurements, including the benefits and drawbacks associated with the selected load balancing techniques. It also summarizes the algorithms based on implementation, time complexity, adaptability, associated issue(s), and targeted QoS parameters, and evaluates the tools and instruments utilized in each investigated study. Moreover, a comparative analysis among static, traditional dynamic, and metaheuristic algorithms based on response time is performed using the CloudSim simulation tool. Finally, the key open problems and potential directions for state-of-the-art metaheuristic-based approaches are addressed.
This paper was motivated by existing problems with cloud data storage at Imo State University, Nigeria, such as outsourced data causing the loss of data and the misuse of customer information by unauthorized users or hackers, leaving customer/client data visible and unprotected. This also exposed clients/customers to enormous risk from defective equipment, bugs, faulty servers, and malicious actions. The aim of this paper, therefore, is to analyze a secure model that uses Unicode Transformation Format (UTF) Base64 algorithms to store data in the cloud securely. The Object-Oriented Hypermedia Analysis and Design Methodology (OOHADM) was adopted. Python was used to develop the security model; role-based access control (RBAC) and multi-factor authentication (MFA) algorithms were integrated to enhance security in the information system developed with HTML 5, JavaScript, Cascading Style Sheets (CSS) version 3, and PHP 7. The paper also discusses concepts such as the development of cloud computing, its characteristics, cloud deployment models, and cloud service models. The results showed that the proposed enhanced security model for corporate-platform information systems handles multiple authorization and authentication challenges: a single login page directs all login requests from the different modules to one Single Sign-On Server (SSOS), which in turn redirects authenticated users to their requested resources/modules, leveraging geo-location integration for physical location validation. This newly developed system addresses the shortcomings of the existing systems and reduces the time and resources incurred while using them.
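The UTF/Base64 encoding step at the core of the model can be sketched in a few lines with Python's standard library. This illustrates the transform only; Base64 is an encoding rather than encryption, so the model's security rests on the RBAC, MFA, and SSO layers built around it.

```python
import base64

def encode_for_storage(plaintext: str) -> str:
    """UTF-8 encode, then Base64 encode, yielding ASCII-safe storage text."""
    return base64.b64encode(plaintext.encode("utf-8")).decode("ascii")

def decode_from_storage(stored: str) -> str:
    """Reverse the transform: Base64 decode, then UTF-8 decode."""
    return base64.b64decode(stored.encode("ascii")).decode("utf-8")

token = encode_for_storage("student_record: Imo State University")
print(token)                       # ASCII-safe Base64 token
print(decode_from_storage(token))  # round-trips to the original string
```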
This study investigates how cybersecurity can be enhanced through cloud computing solutions in the United States, motivated by the rampant data loss, breaches, and unauthorized access perpetrated by internet criminals there. The study adopted a survey research design, collecting data from 890 cloud professionals with relevant knowledge of cybersecurity and cloud computing. A machine learning approach was adopted, specifically a random forest classifier (an ensemble method) alongside a decision tree model. From the features in the data, the ten most important were selected using random forest feature importance, supporting the objective of the study. The study's purpose is to enable organizations to develop suitable techniques to prevent cybercrime, using random forest predictions as they relate to cloud services in the United States. The effectiveness of the models is evaluated using validation metrics that include recall, accuracy, and precision, in addition to F1 scores and confusion matrices. With evaluation scores of 81.9%, 82.6%, and 82.1% on these metrics, the results demonstrated the effectiveness of the random forest model and the importance of machine learning algorithms in preventing cybercrime and boosting security in the cloud environment. The study recommends that other machine learning models be adopted to explore further improvements to cybersecurity through cloud computing.
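The modeling pipeline described, a random forest with feature-importance-based selection scored by standard validation metrics, can be sketched with scikit-learn as follows. The synthetic data stands in for the 890-respondent survey; the top-10 selection mirrors the study's setup but is otherwise an assumption-laden illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the survey data: 890 rows, 20 candidate features.
X, y = make_classification(n_samples=890, n_features=20, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Keep the ten most important features, as in the study, and refit.
top10 = np.argsort(rf.feature_importances_)[-10:]
rf10 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train[:, top10], y_train)

pred = rf10.predict(X_test[:, top10])
print(f"accuracy:  {accuracy_score(y_test, pred):.3f}")
print(f"precision: {precision_score(y_test, pred):.3f}")
print(f"recall:    {recall_score(y_test, pred):.3f}")
print(f"F1:        {f1_score(y_test, pred):.3f}")
```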
This article explores the evolution of cloud computing, its advantages over traditional on-premises infrastructure, and its impact on information security. The study presents a comprehensive literature review covering various cloud infrastructure offerings and security models. Additionally, it analyzes in depth real-life case studies illustrating successful cloud migrations and highlights common information security threats in current cloud computing. The article concludes by offering recommendations to help businesses protect themselves from cloud data breaches and by providing insights into selecting a suitable cloud services provider from an information security perspective.
Cloud computing has gained significant recognition due to its ability to provide a broad range of online services and applications. Nevertheless, existing commercial cloud computing models follow a centralized design, concentrating computational assets, such as storage and server infrastructure, in a limited number of large-scale worldwide data facilities. Optimizing the deployment of virtual machines (VMs) is crucial in this scenario to ensure system dependability, performance, and minimal latency. A significant barrier in the present scenario is load distribution, particularly when striving for improved energy consumption in a hypothetical grid computing framework. This design employs load-balancing techniques to allocate different user workloads across several virtual machines. To address this challenge, we propose the twin-fold moth flame technique, a highly effective optimization technique. The twin-fold moth flame method is intentionally designed around various constraints, including energy efficiency, lifespan analysis, and resource expenditure, providing a thorough approach to evaluating total costs in the cloud computing environment. In assessing the efficacy of our suggested strategy, the study analyzes significant metrics such as energy efficiency, lifespan analysis, and resource expenditure. This investigation aims to enhance cloud computing techniques by developing a new optimization algorithm that considers multiple factors for effective virtual machine placement and load balancing. The proposed work demonstrates notable improvements of 12.15%, 10.68%, 8.70%, 13.29%, 18.46%, and 33.39% for a 40-node configuration over the artificial bee colony-bat algorithm, ant colony optimization, the crow search algorithm, krill herd, the whale optimization genetic algorithm, and the improved Lévy-based whale optimization algorithm, respectively.
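The moth flame family of updates referenced above follows a logarithmic-spiral rule. The sketch below shows the canonical single-flame update, with the spiral constant and the range of t set to the usual textbook choices; it is not the paper's twin-fold variant.

```python
import numpy as np

rng = np.random.default_rng(3)

def spiral_update(moth, flame, b=1.0):
    """Canonical moth-flame move: M = D * exp(b*t) * cos(2*pi*t) + F."""
    d = np.abs(flame - moth)             # distance from the moth to its flame
    t = rng.uniform(-1, 1, moth.shape)   # position along the spiral
    return d * np.exp(b * t) * np.cos(2 * np.pi * t) + flame

# One candidate VM-placement vector spiraling toward a better-scoring one
# (vectors and dimension are illustrative).
moth = rng.uniform(0, 1, size=5)
flame = rng.uniform(0, 1, size=5)
print(spiral_update(moth, flame))
```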
An Operating System (OS) is a critical piece of software that manages a computer's hardware and resources, acting as the intermediary between the computer and the user. Existing OSs were not designed for Big Data and Cloud Computing, resulting in inefficient data processing and management. This paper proposes a simplified and improved kernel on an x86 system designed for Big Data and Cloud Computing purposes. The proposed kernel capitalizes on improved Input/Output (I/O) performance, and its performance engineering applies data-oriented design to traditional data management, improving data processing speed by reducing memory-access overheads. The OS incorporates data-oriented design to "modernize" various Data Science and management aspects. The resulting OS contains a Basic Input/Output System (BIOS) bootloader that boots into Intel 32-bit protected mode, a text display terminal, 4 GB paging memory, a heap block size of 4096, a Hard Disk Drive (HDD) I/O Advanced Technology Attachment (ATA) driver, and more. There are also I/O scheduling algorithm prototypes that demonstrate how a simple Sweeping algorithm is superior to more conventionally known I/O scheduling algorithms. A MapReduce prototype is implemented using the Message Passing Interface (MPI) for big data purposes. An attempt was also made to optimize binary search using modern performance engineering and data-oriented design.
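The "Sweeping" scheduler the prototypes compare is, in effect, the classic elevator scan: service pending requests in order of increasing block address past the head, then reverse. A minimal sketch of the ordering logic follows; the request queue and head position are illustrative, not taken from the paper.

```python
def sweep_order(requests, head):
    """Order disk requests elevator-style: sweep up from the head, then back down.

    This minimizes back-and-forth head movement compared to first-come
    first-served, which is the property the kernel's I/O prototypes exploit.
    """
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down

pending = [98, 183, 37, 122, 14, 124, 65, 67]   # illustrative block addresses
print(sweep_order(pending, head=53))
# [65, 67, 98, 122, 124, 183, 37, 14] -- one upward pass, then a downward pass
```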
文摘Cloud computing provides a diverse and adaptable resource pool over the internet,allowing users to tap into various resources as needed.It has been seen as a robust solution to relevant challenges.A significant delay can hamper the performance of IoT-enabled cloud platforms.However,efficient task scheduling can lower the cloud infrastructure’s energy consumption,thus maximizing the service provider’s revenue by decreasing user job processing times.The proposed Modified Chimp-Whale Optimization Algorithm called Modified Chimp-Whale Optimization Algorithm(MCWOA),combines elements of the Chimp Optimization Algorithm(COA)and the Whale Optimization Algorithm(WOA).To enhance MCWOA’s identification precision,the Sobol sequence is used in the population initialization phase,ensuring an even distribution of the population across the solution space.Moreover,the traditional MCWOA’s local search capabilities are augmented by incorporating the whale optimization algorithm’s bubble-net hunting and random search mechanisms into MCWOA’s position-updating process.This study demonstrates the effectiveness of the proposed approach using a two-story rigid frame and a simply supported beam model.Simulated outcomes reveal that the new method outperforms the original MCWOA,especially in multi-damage detection scenarios.MCWOA excels in avoiding false positives and enhancing computational speed,making it an optimal choice for structural damage detection.The efficiency of the proposed MCWOA is assessed against metrics such as energy usage,computational expense,task duration,and delay.The simulated data indicates that the new MCWOA outpaces other methods across all metrics.The study also references the Whale Optimization Algorithm(WOA),Chimp Algorithm(CA),Ant Lion Optimizer(ALO),Genetic Algorithm(GA)and Grey Wolf Optimizer(GWO).
文摘As the extensive use of cloud computing raises questions about the security of any personal data stored there,cryptography is being used more frequently as a security tool to protect data confidentiality and privacy in the cloud environment.A hypervisor is a virtualization software used in cloud hosting to divide and allocate resources on various pieces of hardware.The choice of hypervisor can significantly impact the performance of cryptographic operations in the cloud environment.An important issue that must be carefully examined is that no hypervisor is completely superior in terms of performance;Each hypervisor should be examined to meet specific needs.The main objective of this study is to provide accurate results to compare the performance of Hyper-V and Kernel-based Virtual Machine(KVM)while implementing different cryptographic algorithms to guide cloud service providers and end users in choosing the most suitable hypervisor for their cryptographic needs.This study evaluated the efficiency of two hypervisors,Hyper-V and KVM,in implementing six cryptographic algorithms:Rivest,Shamir,Adleman(RSA),Advanced Encryption Standard(AES),Triple Data Encryption Standard(TripleDES),Carlisle Adams and Stafford Tavares(CAST-128),BLOWFISH,and TwoFish.The study’s findings show that KVM outperforms Hyper-V,with 12.2%less Central Processing Unit(CPU)use and 12.95%less time overall for encryption and decryption operations with various file sizes.The study’s findings emphasize how crucial it is to pick a hypervisor that is appropriate for cryptographic needs in a cloud environment,which could assist both cloud service providers and end users.Future research may focus more on how various hypervisors perform while handling cryptographic workloads.
基金National Key R&D Program of China,Grant/Award Number:2018AAA0102100National Natural Science Foundation of China,Grant/Award Numbers:62177047,U22A2034+6 种基金International Science and Technology Innovation Joint Base of Machine Vision and Medical Image Processing in Hunan Province,Grant/Award Number:2021CB1013Key Research and Development Program of Hunan Province,Grant/Award Number:2022SK2054111 Project,Grant/Award Number:B18059Natural Science Foundation of Hunan Province,Grant/Award Number:2022JJ30762Fundamental Research Funds for the Central Universities of Central South University,Grant/Award Number:2020zzts143Scientific and Technological Innovation Leading Plan of High‐tech Industry of Hunan Province,Grant/Award Number:2020GK2021Central South University Research Program of Advanced Interdisciplinary Studies,Grant/Award Number:2023QYJC020。
文摘Redundancy elimination techniques are extensively investigated to reduce storage overheads for cloud-assisted health systems.Deduplication eliminates the redundancy of duplicate blocks by storing one physical instance referenced by multiple duplicates.Delta compression is usually regarded as a complementary technique to deduplication to further remove the redundancy of similar blocks,but our observations indicate that this is disobedient when data have sparse duplicate blocks.In addition,there are many overlapped deltas in the resemblance detection process of post-deduplication delta compression,which hinders the efficiency of delta compression and the index phase of resemblance detection inquires abundant non-similar blocks,resulting in inefficient system throughput.Therefore,a multi-feature-based redundancy elimination scheme,called MFRE,is proposed to solve these problems.The similarity feature and temporal locality feature are excavated to assist redundancy elimination where the similarity feature well expresses the duplicate attribute.Then,similarity-based dynamic post-deduplication delta compression and temporal locality-based dynamic delta compression discover more similar base blocks to minimise overlapped deltas and improve compression ratios.Moreover,the clustering method based on block-relationship and the feature index strategy based on bloom filters reduce IO overheads and improve system throughput.Experiments demonstrate that the proposed method,compared to the state-of-the-art method,improves the compression ratio and system throughput by 9.68%and 50%,respectively.
文摘Amid the landscape of Cloud Computing(CC),the Cloud Datacenter(DC)stands as a conglomerate of physical servers,whose performance can be hindered by bottlenecks within the realm of proliferating CC services.A linchpin in CC’s performance,the Cloud Service Broker(CSB),orchestrates DC selection.Failure to adroitly route user requests with suitable DCs transforms the CSB into a bottleneck,endangering service quality.To tackle this,deploying an efficient CSB policy becomes imperative,optimizing DC selection to meet stringent Qualityof-Service(QoS)demands.Amidst numerous CSB policies,their implementation grapples with challenges like costs and availability.This article undertakes a holistic review of diverse CSB policies,concurrently surveying the predicaments confronted by current policies.The foremost objective is to pinpoint research gaps and remedies to invigorate future policy development.Additionally,it extensively clarifies various DC selection methodologies employed in CC,enriching practitioners and researchers alike.Employing synthetic analysis,the article systematically assesses and compares myriad DC selection techniques.These analytical insights equip decision-makers with a pragmatic framework to discern the apt technique for their needs.In summation,this discourse resoundingly underscores the paramount importance of adept CSB policies in DC selection,highlighting the imperative role of efficient CSB policies in optimizing CC performance.By emphasizing the significance of these policies and their modeling implications,the article contributes to both the general modeling discourse and its practical applications in the CC domain.
基金supported via funding from Prince Sattam bin Abdulaziz University Project Number(PSAU/2024/R/1445).
文摘Fog computing has recently developed as a new paradigm with the aim of addressing time-sensitive applications better than with cloud computing by placing and processing tasks in close proximity to the data sources.However,the majority of the fog nodes in this environment are geographically scattered with resources that are limited in terms of capabilities compared to cloud nodes,thus making the application placement problem more complex than that in cloud computing.An approach for cost-efficient application placement in fog-cloud computing environments that combines the benefits of both fog and cloud computing to optimize the placement of applications and services while minimizing costs.This approach is particularly relevant in scenarios where latency,resource constraints,and cost considerations are crucial factors for the deployment of applications.In this study,we propose a hybrid approach that combines a genetic algorithm(GA)with the Flamingo Search Algorithm(FSA)to place application modules while minimizing cost.We consider four cost-types for application deployment:Computation,communication,energy consumption,and violations.The proposed hybrid approach is called GA-FSA and is designed to place the application modules considering the deadline of the application and deploy them appropriately to fog or cloud nodes to curtail the overall cost of the system.An extensive simulation is conducted to assess the performance of the proposed approach compared to other state-of-the-art approaches.The results demonstrate that GA-FSA approach is superior to the other approaches with respect to task guarantee ratio(TGR)and total cost.
基金the deputyship for Research&Innovation,Ministry of Education in Saudi Arabia for funding this research work through the Project Number(IFP-2022-34).
文摘In the cloud environment,ensuring a high level of data security is in high demand.Data planning storage optimization is part of the whole security process in the cloud environment.It enables data security by avoiding the risk of data loss and data overlapping.The development of data flow scheduling approaches in the cloud environment taking security parameters into account is insufficient.In our work,we propose a data scheduling model for the cloud environment.Themodel is made up of three parts that together help dispatch user data flow to the appropriate cloudVMs.The first component is the Collector Agent whichmust periodically collect information on the state of the network links.The second one is the monitoring agent which must then analyze,classify,and make a decision on the state of the link and finally transmit this information to the scheduler.The third one is the scheduler who must consider previous information to transfer user data,including fair distribution and reliable paths.It should be noted that each part of the proposedmodel requires the development of its algorithms.In this article,we are interested in the development of data transfer algorithms,including fairness distribution with the consideration of a stable link state.These algorithms are based on the grouping of transmitted files and the iterative method.The proposed algorithms showthe performances to obtain an approximate solution to the studied problem which is an NP-hard(Non-Polynomial solution)problem.The experimental results show that the best algorithm is the half-grouped minimum excluding(HME),with a percentage of 91.3%,an average deviation of 0.042,and an execution time of 0.001 s.
基金supported by the Ministerio Espanol de Ciencia e Innovación under Project Number PID2020-115570GB-C22 MCIN/AEI/10.13039/501100011033 and by the Cátedra de Empresa Tecnología para las Personas(UGR-Fujitsu).
文摘Cloud computing is a dynamic and rapidly evolving field,where the demand for resources fluctuates continuously.This paper delves into the imperative need for adaptability in the allocation of resources to applications and services within cloud computing environments.The motivation stems from the pressing issue of accommodating fluctuating levels of user demand efficiently.By adhering to the proposed resource allocation method,we aim to achieve a substantial reduction in energy consumption.This reduction hinges on the precise and efficient allocation of resources to the tasks that require those most,aligning with the broader goal of sustainable and eco-friendly cloud computing systems.To enhance the resource allocation process,we introduce a novel knowledge-based optimization algorithm.In this study,we rigorously evaluate its efficacy by comparing it to existing algorithms,including the Flower Pollination Algorithm(FPA),Spark Lion Whale Optimization(SLWO),and Firefly Algo-rithm.Our findings reveal that our proposed algorithm,Knowledge Based Flower Pollination Algorithm(KB-FPA),consistently outperforms these conventional methods in both resource allocation efficiency and energy consumption reduction.This paper underscores the profound significance of resource allocation in the realm of cloud computing.By addressing the critical issue of adaptability and energy efficiency,it lays the groundwork for a more sustainable future in cloud computing systems.Our contribution to the field lies in the introduction of a new resource allocation strategy,offering the potential for significantly improved efficiency and sustainability within cloud computing infrastructures.
基金The authors extend their appreciation to Prince Sattam bin Abdulaziz University for funding this research work through the Project Number(PSAU/2023/01/27268).
文摘Cloud computing has become increasingly popular due to its capacity to perform computations without relying on physical infrastructure,thereby revolutionizing computer processes.However,the rising energy consumption in cloud centers poses a significant challenge,especially with the escalating energy costs.This paper tackles this issue by introducing efficient solutions for data placement and node management,with a clear emphasis on the crucial role of the Internet of Things(IoT)throughout the research process.The IoT assumes a pivotal role in this study by actively collecting real-time data from various sensors strategically positioned in and around data centers.These sensors continuously monitor vital parameters such as energy usage and temperature,thereby providing a comprehensive dataset for analysis.The data generated by the IoT is seamlessly integrated into the Hybrid TCN-GRU-NBeat(NGT)model,enabling a dynamic and accurate representation of the current state of the data center environment.Through the incorporation of the Seagull Optimization Algorithm(SOA),the NGT model optimizes storage migration strategies based on the latest information provided by IoT sensors.The model is trained using 80%of the available dataset and subsequently tested on the remaining 20%.The results demonstrate the effectiveness of the proposed approach,with a Mean Squared Error(MSE)of 5.33%and a Mean Absolute Error(MAE)of 2.83%,accurately estimating power prices and leading to an average reduction of 23.88%in power costs.Furthermore,the integration of IoT data significantly enhances the accuracy of the NGT model,outperforming benchmark algorithms such as DenseNet,Support Vector Machine(SVM),Decision Trees,and AlexNet.The NGT model achieves an impressive accuracy rate of 97.9%,surpassing the rates of 87%,83%,80%,and 79%,respectively,for the benchmark algorithms.These findings underscore the effectiveness of the proposed method in optimizing energy efficiency and enhancing the predictive capabilities of cloud computing systems.The IoT plays a critical role in driving these advancements by providing real-time data insights into the operational aspects of data centers.
文摘The cloud computing technology is utilized for achieving resource utilization of remotebased virtual computer to facilitate the consumers with rapid and accurate massive data services.It utilizes on-demand resource provisioning,but the necessitated constraints of rapid turnaround time,minimal execution cost,high rate of resource utilization and limited makespan transforms the Load Balancing(LB)process-based Task Scheduling(TS)problem into an NP-hard optimization issue.In this paper,Hybrid Prairie Dog and Beluga Whale Optimization Algorithm(HPDBWOA)is propounded for precise mapping of tasks to virtual machines with the due objective of addressing the dynamic nature of cloud environment.This capability of HPDBWOA helps in decreasing the SLA violations and Makespan with optimal resource management.It is modelled as a scheduling strategy which utilizes the merits of PDOA and BWOA for attaining reactive decisions making with respect to the process of assigning the tasks to virtual resources by considering their priorities into account.It addresses the problem of pre-convergence with wellbalanced exploration and exploitation to attain necessitated Quality of Service(QoS)for minimizing the waiting time incurred during TS process.It further balanced exploration and exploitation rates for reducing the makespan during the task allocation with complete awareness of VM state.The results of the proposed HPDBWOA confirmed minimized energy utilization of 32.18% and reduced cost of 28.94% better than approaches used for investigation.The statistical investigation of the proposed HPDBWOA conducted using ANOVA confirmed its efficacy over the benchmarked systems in terms of throughput,system,and response time.
基金jointly supported by the Jiangsu Postgraduate Research and Practice Innovation Project under Grant KYCX22_1030,SJCX22_0283 and SJCX23_0293the NUPTSF under Grant NY220201.
文摘Task scheduling plays a key role in effectively managing and allocating computing resources to meet various computing tasks in a cloud computing environment.Short execution time and low load imbalance may be the challenges for some algorithms in resource scheduling scenarios.In this work,the Hierarchical Particle Swarm Optimization-Evolutionary Artificial Bee Colony Algorithm(HPSO-EABC)has been proposed,which hybrids our presented Evolutionary Artificial Bee Colony(EABC),and Hierarchical Particle Swarm Optimization(HPSO)algorithm.The HPSO-EABC algorithm incorporates both the advantages of the HPSO and the EABC algorithm.Comprehensive testing including evaluations of algorithm convergence speed,resource execution time,load balancing,and operational costs has been done.The results indicate that the EABC algorithm exhibits greater parallelism compared to the Artificial Bee Colony algorithm.Compared with the Particle Swarm Optimization algorithm,the HPSO algorithmnot only improves the global search capability but also effectively mitigates getting stuck in local optima.As a result,the hybrid HPSO-EABC algorithm demonstrates significant improvements in terms of stability and convergence speed.Moreover,it exhibits enhanced resource scheduling performance in both homogeneous and heterogeneous environments,effectively reducing execution time and cost,which also is verified by the ablation experimental.
基金supported by the Ministerio Espanol de Ciencia e Innovación under Project Number PID2020-115570GB-C22,MCIN/AEI/10.13039/501100011033by the Cátedra de Empresa Tecnología para las Personas(UGR-Fujitsu).
文摘As cloud computing usage grows,cloud data centers play an increasingly important role.To maximize resource utilization,ensure service quality,and enhance system performance,it is crucial to allocate tasks and manage performance effectively.The purpose of this study is to provide an extensive analysis of task allocation and performance management techniques employed in cloud data centers.The aim is to systematically categorize and organize previous research by identifying the cloud computing methodologies,categories,and gaps.A literature review was conducted,which included the analysis of 463 task allocations and 480 performance management papers.The review revealed three task allocation research topics and seven performance management methods.Task allocation research areas are resource allocation,load-Balancing,and scheduling.Performance management includes monitoring and control,power and energy management,resource utilization optimization,quality of service management,fault management,virtual machine management,and network management.The study proposes new techniques to enhance cloud computing work allocation and performance management.Short-comings in each approach can guide future research.The research’s findings on cloud data center task allocation and performance management can assist academics,practitioners,and cloud service providers in optimizing their systems for dependability,cost-effectiveness,and scalability.Innovative methodologies can steer future research to fill gaps in the literature.
Abstract: Cloud computing has drastically changed the delivery and consumption of live streaming content. The designs, challenges, and possible uses of cloud computing for live streaming are studied. A comprehensive overview of the technical and business issues surrounding cloud-based live streaming is provided, including the benefits of cloud computing, the various live streaming architectures, and the challenges that live streaming service providers face in delivering high-quality, real-time services. The different techniques used to improve the performance of video streaming, such as adaptive bit-rate streaming, multicast distribution, and edge computing, are discussed, and the necessity of low-latency, high-quality video transmission in cloud-based live streaming is underlined. Issues such as improving user experience and live streaming service performance using cutting-edge technology, such as artificial intelligence and machine learning, are discussed. In addition, the legal and regulatory implications of cloud-based live streaming, including issues with network neutrality, data privacy, and content moderation, are addressed. The future of cloud computing for live streaming is then examined, looking at the most likely new developments in trends and technology. For technology vendors, live streaming service providers, and regulators, the findings have major policy-relevant implications. Insights into the key challenges and opportunities associated with cloud-based live streaming are provided, along with suggestions on how stakeholders should address these concerns and take advantage of the potential presented by this rapidly evolving sector.
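As a concrete flavor of one technique mentioned above, the sketch below implements throughput- and buffer-aware adaptive bit-rate selection. The bitrate ladder, the 0.8 safety factor, and the buffer threshold are illustrative assumptions rather than values from any specific player.

```python
# Throughput- and buffer-aware adaptive bit-rate (ABR) selection sketch;
# the ladder and thresholds are assumed values for illustration.
LADDER_KBPS = [400, 800, 1600, 3200, 6400]   # available renditions

def pick_bitrate(throughput_kbps, buffer_s):
    """Highest rendition sustainable at ~80% of measured throughput,
    clamped down when the playback buffer is nearly empty."""
    budget = 0.8 * throughput_kbps
    choice = LADDER_KBPS[0]
    for rate in LADDER_KBPS:
        if rate <= budget:
            choice = rate
    if buffer_s < 5:                          # low buffer: be conservative
        choice = min(choice, LADDER_KBPS[1])
    return choice

for tput, buf in [(900, 20), (5000, 20), (5000, 2)]:
    print(tput, "kbps,", buf, "s buffer ->", pick_bitrate(tput, buf), "kbps")
```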
Abstract: A new era of data access and management has begun with the use of cloud computing in the healthcare industry. Despite the efficiency and scalability that the cloud provides, the security of private patient data is still a major concern. Encryption, network security, and adherence to data protection laws are key to ensuring the confidentiality and integrity of healthcare data in the cloud. The computational overhead of encryption technologies could lead to delays in data access and processing rates. To address these challenges, we introduce the Enhanced Parallel Multi-Key Encryption Algorithm (EPM-KEA), aiming to bolster healthcare data security and facilitate the secure storage of critical patient records in the cloud. The data was gathered from two categories: Authorization for Hospital Admission (AIH) and Authorization for High Complexity Operations. We use Z-score normalization for preprocessing. The primary goal of implementing encryption techniques is to secure and store massive amounts of data in the cloud. It is feasible that cloud storage alternatives for protecting healthcare data will become more widely available if security issues can be successfully fixed. As a result of our analysis using specific parameters, including execution time (42%), encryption time (45%), decryption time (40%), security level (97%), and energy consumption (53%), the system demonstrated favorable performance when compared to the traditional method. This suggests that, by addressing these security concerns, there is the potential for broader accessibility to cloud storage solutions for safeguarding healthcare data.
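A minimal sketch of two steps described above, Z-score normalization and parallel encryption of record chunks under multiple keys, is shown below. Fernet from the third-party cryptography package stands in for the EPM-KEA cipher, whose exact construction is not reproduced here, and the per-record keying scheme is an assumption.

```python
# Z-score preprocessing plus parallel multi-key encryption sketch.
# Fernet is a stand-in cipher, not the paper's EPM-KEA construction.
from concurrent.futures import ThreadPoolExecutor
from statistics import mean, stdev
from cryptography.fernet import Fernet   # third-party: pip install cryptography

def z_score(values):
    """Center and scale numeric fields before storage/analysis."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

records = [f"patient-{i},stay={d}".encode() for i, d in enumerate([3, 9, 4, 12])]
keys = [Fernet.generate_key() for _ in records]        # one key per chunk (assumed)

def encrypt(chunk_and_key):
    chunk, key = chunk_and_key
    return Fernet(key).encrypt(chunk)

with ThreadPoolExecutor() as pool:                     # encrypt chunks in parallel
    tokens = list(pool.map(encrypt, zip(records, keys)))

print(z_score([3, 9, 4, 12]))
print(Fernet(keys[0]).decrypt(tokens[0]))              # round-trip check
```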
Abstract: Cloud computing is the new norm within business entities as businesses try to keep up with technological advancements and user needs. The concept is defined as a computing environment that allows for remote outsourcing of storage and computing resources. A hybrid cloud environment is an excellent example: it provides organizations with increased scalability and control over their data, along with support for a remote workforce. However, hybrid cloud systems are expensive, as organizations must operate different infrastructures, and they introduce complexity into the organization's activities. Data security is among the most vital concerns arising from the use of cloud computing, affecting the rate of user adoption and acceptance. This article, borrowing from the hybrid cloud computing model, recommends combining traditional and modern data security systems. Traditional data security systems have proven effective in their respective roles, with the main challenge being their limited awareness of context and connectivity. Integrating traditional and modern designs is therefore recommended to enhance effectiveness, context awareness, connectivity, and efficiency.
Abstract: Cloud computing provides on-demand access to a shared resource pool. It has completely changed the way businesses are managed, applications are implemented, and services are provided. This rise in popularity has led to a significant increase in user demand for services. In cloud environments, however, efficient load balancing is essential to ensure optimal performance and resource utilization. This systematic review gives a detailed description of load balancing techniques, covering both static and dynamic load balancing algorithms; in particular, metaheuristic-based dynamic load balancing algorithms are identified as the optimal solution under increased traffic. In a cloud-based context, this paper describes load balancing measurements, including the benefits and drawbacks of the selected load balancing techniques. It also summarizes the algorithms by implementation, time complexity, adaptability, associated issue(s), and targeted QoS parameters, and evaluates the tools and instruments used in each investigated study. Moreover, a comparative analysis of static, traditional dynamic, and metaheuristic algorithms based on response time is performed using the CloudSim simulation tool. Finally, the key open problems and potential directions for state-of-the-art metaheuristic-based approaches are addressed.
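To illustrate the static/dynamic distinction the review draws, the sketch below dispatches the same synthetic workload with a static round-robin policy and a dynamic least-loaded policy and compares the resulting imbalance. The workload and the load metric are assumptions, not a CloudSim experiment.

```python
# Static (round-robin) vs. dynamic (least-loaded) dispatch on a synthetic
# workload; job costs and the imbalance metric are assumed for illustration.
from itertools import cycle
import random

rng = random.Random(7)
jobs = [rng.randint(1, 20) for _ in range(200)]         # job costs
N_SERVERS = 4

static = [0] * N_SERVERS
for server, job in zip(cycle(range(N_SERVERS)), jobs):  # static: round-robin
    static[server] += job

dynamic = [0] * N_SERVERS
for job in jobs:                                        # dynamic: least-loaded
    dynamic[dynamic.index(min(dynamic))] += job

print("round-robin loads :", static, "imbalance:", max(static) - min(static))
print("least-loaded loads:", dynamic, "imbalance:", max(dynamic) - min(dynamic))
```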
Abstract: This paper was motivated by existing problems with cloud data storage at Imo State University, Nigeria, where outsourced data has led to data loss and the misuse of customer information by unauthorized users or hackers, leaving customer/client data visible and unprotected. This also exposed clients/customers to enormous risk from defective equipment, bugs, faulty servers, and suspicious actions. The aim of this paper is therefore to analyze a secure model that uses Unicode Transformation Format (UTF) Base64 algorithms to store data in the cloud securely. The Object-Oriented Hypermedia Analysis and Design Methodology (OOHADM) was adopted. Python was used to develop the security model; role-based access control (RBAC) and multi-factor authentication (MFA) algorithms were integrated to enhance the security of the information system, which was developed with HTML5, JavaScript, Cascading Style Sheets (CSS) version 3, and PHP 7. The paper also discusses concepts such as the development of cloud computing, its characteristics, cloud deployment models, and cloud service models. The results showed that the proposed enhanced security model for corporate-platform information systems handles multiple authorization and authentication threats: a single login page directs all login requests from the different modules to one Single Sign-On Server (SSOS), which in turn redirects authenticated users to their requested resources/modules, leveraging geo-location integration for physical-location validation. The newly developed system addresses the shortcomings of the existing systems and reduces the time and resources incurred in using them.
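A minimal sketch of the two named mechanisms, Base64 encoding of UTF-8 data before storage and a role-based access check in front of retrieval, is given below. The role table and helper names are hypothetical, and since Base64 is an encoding rather than encryption, a real deployment would layer encryption and MFA on top.

```python
# Base64 storage of UTF-8 data behind a role-based access check.
# ROLE_PERMS, store(), and fetch() are hypothetical names for illustration.
import base64

ROLE_PERMS = {"admin": {"read", "write"}, "clerk": {"read"}}   # assumed roles

def store(plaintext: str) -> bytes:
    """Encode UTF-8 text as Base64 before handing it to cloud storage."""
    return base64.b64encode(plaintext.encode("utf-8"))

def fetch(blob: bytes, role: str) -> str:
    """Decode a stored blob only if the caller's role permits reading."""
    if "read" not in ROLE_PERMS.get(role, set()):
        raise PermissionError(f"role {role!r} may not read")
    return base64.b64decode(blob).decode("utf-8")

blob = store("student-record: Okafor, matric 2031")
print(fetch(blob, "clerk"))        # allowed
try:
    fetch(blob, "guest")           # denied: unknown role
except PermissionError as err:
    print(err)
```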
Abstract: This study investigates how cybersecurity can be enhanced through cloud computing solutions in the United States. The motivation is the rampant loss of data, breaches, and unauthorized access by internet criminals in the United States. The study adopted a survey research design, collecting data from 890 cloud professionals with relevant knowledge of cybersecurity and cloud computing. A machine learning approach was adopted, specifically a random forest classifier, an ensemble model, and a decision tree model. Ten important features were selected from the data using random forest feature importance, which helps achieve the objective of the study. The study's purpose is to enable organizations to develop suitable techniques to prevent cybercrime using random forest predictions as they relate to cloud services in the United States. The effectiveness of the models is evaluated using validation metrics that include recall, accuracy, and precision, in addition to F1 scores and confusion matrices. Based on evaluation scores (accuracy, precision, recall, and F1) of 81.9%, 82.6%, and 82.1%, the results demonstrated the effectiveness of the random forest model and showed the importance of machine learning algorithms in preventing cybercrime and boosting security in the cloud environment. The study recommends that other machine learning models be adopted to explore further improvements to cybersecurity through cloud computing.
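The modelling pipeline can be sketched as below: a scikit-learn random forest with the top ten features chosen by impurity importance, then refit on those features. Synthetic data stands in for the 890-respondent survey, so the printed scores are illustrative, not the study's results.

```python
# Random forest with feature-importance-based selection of the top ten
# features; synthetic data stands in for the survey dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

X, y = make_classification(n_samples=890, n_features=25, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top10 = forest.feature_importances_.argsort()[::-1][:10]   # select top features
print("top-10 feature indices:", sorted(top10.tolist()))

slim = RandomForestClassifier(n_estimators=200, random_state=0)
slim.fit(X_tr[:, top10], y_tr)                              # refit on top ten
pred = slim.predict(X_te[:, top10])
print("accuracy:", round(accuracy_score(y_te, pred), 3),
      "F1:", round(f1_score(y_te, pred), 3))
```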
Abstract: This article explores the evolution of cloud computing, its advantages over traditional on-premises infrastructure, and its impact on information security. The study presents a comprehensive literature review covering various cloud infrastructure offerings and security models. Additionally, it analyzes in depth real-life case studies illustrating successful cloud migrations and highlights common information security threats in current cloud computing. The article concludes by offering recommendations to help businesses protect themselves from cloud data breaches and by providing insights into selecting a suitable cloud service provider from an information security perspective.
Funding: This work was supported in part by the Natural Science Foundation of the Education Department of Henan Province (Grant 22A520025), the National Natural Science Foundation of China (Grant 61975053), and the National Key Research and Development Program of Quality Information Control Technology for Multi-Modal Grain Transportation Efficient Connection (2022YFD2100202).
Abstract: Cloud computing has gained significant recognition due to its ability to provide a broad range of online services and applications. Nevertheless, existing commercial cloud computing models concentrate computational assets, such as storage and server infrastructure, in a limited number of large-scale worldwide data facilities. Optimizing the deployment of virtual machines (VMs) is crucial in this scenario to ensure system dependability, performance, and minimal latency. A significant barrier in the present scenario is load distribution, particularly when striving for improved energy consumption in a hypothetical grid computing framework. This design employs load-balancing techniques to allocate different user workloads across several virtual machines. To address this challenge, we propose the twin-fold moth flame technique, a highly effective optimization technique. The twin-fold moth flame method was intentionally designed to consider various constraints, including energy efficiency, lifespan analysis, and resource expenditure, and it provides a thorough approach to evaluating total costs in the cloud computing environment. In assessing the efficacy of our suggested strategy, the study analyzes significant metrics such as energy efficiency, lifespan analysis, and resource expenditure. This investigation aims to enhance cloud computing techniques by developing a new optimization algorithm that considers multiple factors for effective virtual machine placement and load balancing. The proposed work demonstrates notable improvements of 12.15%, 10.68%, 8.70%, 13.29%, 18.46%, and 33.39% for 40-node data over the artificial bee colony-bat algorithm, ant colony optimization, the crow search algorithm, krill herd, the whale optimization genetic algorithm, and the improved Lévy-based whale optimization algorithm, respectively.
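For intuition, the sketch below applies a moth-flame-style spiral update to VM placement, using load imbalance as a stand-in cost for the energy/lifespan/expenditure objectives. The spiral form, the flame-count schedule, and the cost model are assumptions, not the twin-fold formulation itself.

```python
# Moth-flame-style VM placement sketch; spiral update and cost model are
# illustrative assumptions, not the published twin-fold technique.
import math, random

N_TASKS, N_VMS, POP, ITERS = 30, 5, 25, 200
rng = random.Random(3)
work = [rng.randint(1, 15) for _ in range(N_TASKS)]

def cost(pos):
    """Load imbalance of the placement encoded by a continuous position."""
    load = [0.0] * N_VMS
    for t, p in enumerate(pos):
        load[int(p) % N_VMS] += work[t]
    return max(load) - min(load)

moths = [[rng.uniform(0, N_VMS) for _ in range(N_TASKS)] for _ in range(POP)]
for it in range(ITERS):
    flames = [m[:] for m in sorted(moths, key=cost)]   # best solutions as flames
    n_flames = max(1, round(POP - it * (POP - 1) / ITERS))
    for i, m in enumerate(moths):
        flame = flames[min(i, n_flames - 1)]
        for j in range(N_TASKS):
            d = abs(flame[j] - m[j])
            t = rng.uniform(-1, 1)
            # Logarithmic-spiral move around the assigned flame.
            m[j] = d * math.exp(t) * math.cos(2 * math.pi * t) + flame[j]
            m[j] = min(max(m[j], 0.0), N_VMS - 1e-9)   # keep in valid range

print("final imbalance:", cost(min(moths, key=cost)))
```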
Abstract: An Operating System (OS) is a critical piece of software that manages a computer's hardware and resources, acting as the intermediary between the computer and the user. Existing OSes are not designed for Big Data and cloud computing, resulting in inefficient data processing and management. This paper proposes a simplified and improved kernel on an x86 system designed for Big Data and cloud computing purposes. The proposed design draws its performance benefits from improved Input/Output (I/O) handling: applying data-oriented design to traditional data management improves data processing speed by reducing the memory-access overheads of conventional data management. The OS incorporates a data-oriented design to modernize various data science and management aspects. The resulting OS contains a Basic Input/Output System (BIOS) bootloader that boots into Intel 32-bit protected mode, a text display terminal, 4 GB of paged memory, a 4096-byte heap block size, a Hard Disk Drive (HDD) Advanced Technology Attachment (ATA) I/O driver, and more. There are also I/O scheduling algorithm prototypes that demonstrate how a simple sweeping algorithm outperforms more conventionally known I/O scheduling algorithms. A MapReduce prototype is implemented using the Message Passing Interface (MPI) for Big Data purposes, and an attempt was made to optimize binary search using modern performance engineering and data-oriented design.
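The claimed advantage of the sweeping prototype can be illustrated as below, comparing first-come-first-served ordering with a simple elevator-style sweep by total head travel across cylinders. The request trace and starting head position are assumptions for illustration.

```python
# FCFS vs. a simple sweeping (elevator/SCAN-style) I/O request order,
# measured by total head travel; the trace and start position are assumed.
import random

rng = random.Random(11)
requests = [rng.randint(0, 4999) for _ in range(50)]   # cylinder numbers
START = 2500

def travel(order, start=START):
    """Total head movement to service requests in the given order."""
    dist, pos = 0, start
    for cyl in order:
        dist += abs(cyl - pos)
        pos = cyl
    return dist

# Sweep: serve everything at or above the head going up, then sweep down.
up = sorted(c for c in requests if c >= START)
down = sorted((c for c in requests if c < START), reverse=True)

print("FCFS travel :", travel(requests))
print("Sweep travel:", travel(up + down))
```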