Cloud computing provides a diverse and adaptable resource pool over the internet, allowing users to tap into various resources as needed. It has been seen as a robust solution to relevant challenges. A significant delay can hamper the performance of IoT-enabled cloud platforms. However, efficient task scheduling can lower the cloud infrastructure's energy consumption, thus maximizing the service provider's revenue by decreasing user job processing times. The proposed Modified Chimp-Whale Optimization Algorithm (MCWOA) combines elements of the Chimp Optimization Algorithm (COA) and the Whale Optimization Algorithm (WOA). To enhance MCWOA's identification precision, the Sobol sequence is used in the population initialization phase, ensuring an even distribution of the population across the solution space. Moreover, the traditional MCWOA's local search capabilities are augmented by incorporating the whale optimization algorithm's bubble-net hunting and random search mechanisms into MCWOA's position-updating process. This study demonstrates the effectiveness of the proposed approach using a two-story rigid frame and a simply supported beam model. Simulated outcomes reveal that the new method outperforms the original MCWOA, especially in multi-damage detection scenarios. MCWOA excels in avoiding false positives and enhancing computational speed, making it an optimal choice for structural damage detection. The efficiency of the proposed MCWOA is assessed against metrics such as energy usage, computational expense, task duration, and delay. The simulated data indicates that the new MCWOA outpaces other methods across all metrics. The study also references the Whale Optimization Algorithm (WOA), Chimp Algorithm (CA), Ant Lion Optimizer (ALO), Genetic Algorithm (GA) and Grey Wolf Optimizer (GWO).
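As a hedged illustration of the Sobol-based initialization step described above (not the authors' code), the sketch below draws a quasi-random initial population with SciPy's quasi-Monte Carlo module; the dimension, bounds, and population size are assumed values.

```python
# Illustrative sketch of Sobol-sequence population initialization (assumed parameters).
import numpy as np
from scipy.stats import qmc

dim = 10            # number of decision variables (assumption)
pop_size = 64       # population size; powers of two suit Sobol sampling
lower, upper = np.full(dim, -10.0), np.full(dim, 10.0)   # assumed search bounds

sampler = qmc.Sobol(d=dim, scramble=True)
unit_points = sampler.random(pop_size)              # low-discrepancy points in [0, 1)^dim
population = qmc.scale(unit_points, lower, upper)   # map evenly onto the solution space
print(population.shape)                             # (64, 10)
```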
The cloud computing technology is utilized for achieving resource utilization of remote-based virtual computers to facilitate consumers with rapid and accurate massive data services. It utilizes on-demand resource provisioning, but the necessitated constraints of rapid turnaround time, minimal execution cost, a high rate of resource utilization and limited makespan transform the Load Balancing (LB) process-based Task Scheduling (TS) problem into an NP-hard optimization issue. In this paper, the Hybrid Prairie Dog and Beluga Whale Optimization Algorithm (HPDBWOA) is propounded for precise mapping of tasks to virtual machines with the objective of addressing the dynamic nature of the cloud environment. This capability of HPDBWOA helps in decreasing SLA violations and makespan with optimal resource management. It is modelled as a scheduling strategy which utilizes the merits of PDOA and BWOA for attaining reactive decision-making with respect to the process of assigning tasks to virtual resources by taking their priorities into account. It addresses the problem of pre-convergence with well-balanced exploration and exploitation to attain the necessitated Quality of Service (QoS) for minimizing the waiting time incurred during the TS process. It further balances exploration and exploitation rates for reducing the makespan during task allocation with complete awareness of the VM state. The results of the proposed HPDBWOA confirmed energy utilization minimized by 32.18% and cost reduced by 28.94% compared with the approaches used for investigation. The statistical investigation of the proposed HPDBWOA conducted using ANOVA confirmed its efficacy over the benchmarked systems in terms of throughput, system, and response time.
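Independent of the HPDBWOA internals (which the abstract does not detail), the makespan objective that such task-to-VM schedulers minimize can be evaluated as in this sketch; the task lengths, VM speeds, and assignment vector are assumed illustrative values.

```python
# Illustrative makespan evaluation for a task-to-VM assignment (assumed data, not HPDBWOA itself).
import numpy as np

task_lengths = np.array([400.0, 250.0, 800.0, 120.0, 640.0])  # e.g., million instructions (assumption)
vm_speeds = np.array([100.0, 250.0])                           # e.g., MIPS per VM (assumption)
assignment = np.array([0, 1, 1, 0, 1])                         # task i runs on VM assignment[i]

def makespan(assignment, task_lengths, vm_speeds):
    # Finish time of each VM is the sum of execution times of its assigned tasks.
    finish = np.zeros(len(vm_speeds))
    for task, vm in enumerate(assignment):
        finish[vm] += task_lengths[task] / vm_speeds[vm]
    return finish.max()

print(round(makespan(assignment, task_lengths, vm_speeds), 2))  # 6.76
```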
With the rapid advancement of Internet of Vehicles (IoV) technology, the demands for real-time navigation, advanced driver-assistance systems (ADAS), vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications, and multimedia entertainment systems have made in-vehicle applications increasingly computing-intensive and delay-sensitive. These applications require significant computing resources, which can overwhelm the limited computing capabilities of vehicle terminals, despite advancements in computing hardware, due to the complexity of tasks, energy consumption, and cost constraints. To address this issue in IoV-based edge computing, particularly in scenarios where available computing resources in vehicles are scarce, a multi-master and multi-slave double-layer game model is proposed, which is based on task offloading and pricing strategies. The existence of a Nash equilibrium of the game is proven, and a distributed artificial bee colony algorithm is employed to reach the game equilibrium. Our proposed solution addresses these bottlenecks by leveraging a game-theoretic approach for task offloading and resource allocation in mobile edge computing (MEC)-enabled IoV environments. Simulation results demonstrate that the proposed scheme outperforms existing solutions in terms of convergence speed and system utility. Specifically, the total revenue achieved by our scheme surpasses other algorithms by at least 8.98%.
Infrastructure as a Service (IaaS) in cloud computing enables flexible resource distribution over the Internet, but achieving optimal scheduling remains a challenge. Effective resource allocation in cloud-based environments, particularly within the IaaS model, poses persistent challenges. Existing methods often struggle with slow optimization, imbalanced workload distribution, and inefficient use of available assets. These limitations result in longer processing times, increased operational expenses, and inadequate resource deployment, particularly under fluctuating demands. To overcome these issues, a novel Clustered Input-Oriented Salp Swarm Algorithm (CIOSSA) is introduced. This approach combines two distinct strategies: Task Splitting Agglomerative Clustering (TSAC) with an Input-Oriented Salp Swarm Algorithm (IOSSA), which prioritizes tasks based on urgency, and a refined multi-leader model that accelerates optimization processes, enhancing both speed and accuracy. By continuously assessing system capacity before task distribution, the model ensures that assets are deployed effectively and costs are controlled. The dual-leader technique expands the potential solution space, leading to substantial gains in processing speed, cost-effectiveness, asset efficiency, and system throughput, as demonstrated by comprehensive tests. As a result, the suggested model performs better than existing approaches in terms of makespan, resource utilization, throughput, and convergence speed, demonstrating that CIOSSA is scalable, reliable, and appropriate for the dynamic settings found in cloud computing.
Task scheduling plays a key role in effectively managing and allocating computing resources to meet various computing tasks in a cloud computing environment. Short execution time and low load imbalance may be challenging for some algorithms in resource scheduling scenarios. In this work, the Hierarchical Particle Swarm Optimization-Evolutionary Artificial Bee Colony Algorithm (HPSO-EABC) has been proposed, which hybridizes our proposed Evolutionary Artificial Bee Colony (EABC) and Hierarchical Particle Swarm Optimization (HPSO) algorithms. The HPSO-EABC algorithm incorporates the advantages of both the HPSO and the EABC algorithm. Comprehensive testing, including evaluations of algorithm convergence speed, resource execution time, load balancing, and operational costs, has been done. The results indicate that the EABC algorithm exhibits greater parallelism compared to the Artificial Bee Colony algorithm. Compared with the Particle Swarm Optimization algorithm, the HPSO algorithm not only improves the global search capability but also effectively mitigates getting stuck in local optima. As a result, the hybrid HPSO-EABC algorithm demonstrates significant improvements in terms of stability and convergence speed. Moreover, it exhibits enhanced resource scheduling performance in both homogeneous and heterogeneous environments, effectively reducing execution time and cost, which is also verified by the ablation experiments.
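The abstract does not give the HPSO-EABC update rules; as background only, the canonical particle swarm update that HPSO-style variants build on looks like the sketch below, with assumed typical values for the inertia weight and acceleration coefficients.

```python
# Canonical PSO velocity/position update (background sketch, not the paper's HPSO-EABC).
import numpy as np

rng = np.random.default_rng(0)
w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients (assumed typical values)

def pso_step(x, v, pbest, gbest):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new

x = rng.uniform(-5, 5, size=(30, 10))   # 30 particles, 10 dimensions (assumption)
v = np.zeros_like(x)
pbest, gbest = x.copy(), x[0]
x, v = pso_step(x, v, pbest, gbest)
```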
This study investigates how cybersecurity can be enhanced through cloud computing solutions in the United States. The motive for this study is the rampant loss of data, breaches, and unauthorized access by internet criminals in the United States. The study adopted a survey research design, collecting data from 890 cloud professionals with relevant knowledge of cybersecurity and cloud computing. A machine learning approach was adopted, specifically a random forest classifier, an ensemble of decision tree models. Out of the features in the data, ten important features were selected using random forest feature importance, which helps to achieve the objective of the study. The study's purpose is to enable organizations to develop suitable techniques to prevent cybercrime using random forest predictions as they relate to cloud services in the United States. The effectiveness of the models used is evaluated by utilizing validation metrics that include recall values, accuracy, and precision, in addition to F1 scores and confusion matrices. Based on evaluation scores (accuracy, precision, recall, and F1 scores) of 81.9%, 82.6%, and 82.1%, the results demonstrated the effectiveness of the random forest model. It showed the importance of machine learning algorithms in preventing cybercrime and boosting security in the cloud environment. It recommends that other machine learning models be adopted to see how to improve cybersecurity through cloud computing.
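As a hedged sketch of the feature-selection step described above (the dataset and features here are synthetic placeholders, not the study's data), scikit-learn's random forest importances can be used to keep the ten strongest features:

```python
# Illustrative top-10 feature selection via random forest importance (synthetic data, not the study's).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=890, n_features=25, n_informative=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

rf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
top10 = np.argsort(rf.feature_importances_)[::-1][:10]     # indices of the 10 most important features
rf_top = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train[:, top10], y_train)
print("held-out accuracy on top-10 features:", rf_top.score(X_test[:, top10], y_test))
```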
Cloud Computing has the ability to provide on-demand access to a shared resource pool. It has completely changed the way businesses are managed, implement applications, and provide services. The rise in popularity has led to a significant increase in user demand for services. However, in cloud environments efficient load balancing is essential to ensure optimal performance and resource utilization. This systematic review targets a detailed description of load balancing techniques, including static and dynamic load balancing algorithms. Specifically, metaheuristic-based dynamic load balancing algorithms are identified as the optimal solution in the case of increased traffic. In a cloud-based context, this paper describes load balancing measurements, including the benefits and drawbacks associated with the selected load balancing techniques. It also summarizes the algorithms based on implementation, time complexity, adaptability, associated issue(s), and targeted QoS parameters. Additionally, the analysis evaluates the tools and instruments utilized in each investigated study. Moreover, a comparative analysis among static, traditional dynamic and metaheuristic algorithms based on response time using the CloudSim simulation tool is also performed. Finally, the key open problems and potential directions for the state-of-the-art metaheuristic-based approaches are also addressed.
This research investigates the comparative efficacy of generating zero divisor graphs (ZDGs) of the ring of integers ℤ_n modulo n using MAPLE algorithms. Zero divisor graphs, pivotal in the study of ring theory, depict relationships between elements of a ring that multiply to zero. The paper explores the development and implementation of algorithms in MAPLE for constructing these ZDGs. The comparative study aims to discern the strengths, limitations, and computational efficiency of different MAPLE algorithms for creating zero divisor graphs, offering insights for mathematicians, researchers, and computational enthusiasts involved in ring theory and mathematical computations.
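The construction itself is straightforward: the vertices are the nonzero zero divisors of ℤ_n, and two distinct vertices a and b are adjacent when a·b ≡ 0 (mod n). The paper works in MAPLE; purely as an illustration, the same graph can be sketched in Python as follows.

```python
# Illustrative zero divisor graph of Z_n (Python sketch; the paper's implementations are in MAPLE).
from itertools import combinations

def zero_divisor_graph(n):
    # Nonzero zero divisors: a != 0 with some b != 0 such that a*b = 0 (mod n).
    vertices = [a for a in range(1, n) if any((a * b) % n == 0 for b in range(1, n))]
    edges = [(a, b) for a, b in combinations(vertices, 2) if (a * b) % n == 0]
    return vertices, edges

verts, edges = zero_divisor_graph(12)
print(verts)   # [2, 3, 4, 6, 8, 9, 10]
print(edges)   # [(2, 6), (3, 4), (3, 8), (4, 6), (4, 9), (6, 8), (6, 10), (8, 9)]
```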
Since Grover's algorithm was first introduced, it has become a category of quantum algorithms that can be applied to many problems through the exploitation of quantum parallelism. The original application was the unstructured search problems with the time complexity of O(√N). In Grover's algorithm, the key is the Oracle and Amplitude Amplification. In this paper, our purpose is to show through examples that, in general, the time complexity of the Oracle Phase is O(N), not O(1). As a result, the time complexity of Grover's algorithm is O(N), not O(√N). As a secondary purpose, we also attempt to restore the time complexity of Grover's algorithm to its original form, O(√N), by introducing an O(1) parallel algorithm for unstructured search without repeated items, which will work for most cases. In the worst-case scenarios where the number of repeated items is O(N), the time complexity of the Oracle Phase is still O(N) even after additional preprocessing.
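For context (standard textbook results, not claims made by the paper): with a single marked item among N, each Grover iteration rotates the state by an angle θ with sin θ = 1/√N, so roughly (π/4)√N iterations suffice, which is where the O(√N) query count comes from.

```latex
\sin\theta = \frac{1}{\sqrt{N}}, \qquad
P_{\text{success}}(k) = \sin^2\!\bigl((2k+1)\theta\bigr), \qquad
k_{\text{opt}} \approx \left\lfloor \frac{\pi}{4}\sqrt{N} \right\rfloor
\;\Rightarrow\; O(\sqrt{N}) \text{ oracle queries.}
```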
To address the problems of current cloud computing resource security distribution models, namely poor optimization effect and poor convergence, this paper puts forward a cloud computing resource security distribution model based on an improved artificial firefly algorithm. First of all, according to the characteristics of the artificial firefly swarm algorithm and the complex method, it incorporates the ideas of the complex method into the artificial firefly algorithm, uses the complex method to guide the search of the artificial fireflies in the population, and then introduces a local search operator into the firefly movement mechanism, in order to improve the searching efficiency and convergence precision of the algorithm. Simulation results show that the cloud computing resource security distribution model based on the improved artificial firefly algorithm proposed in this paper has good convergence and optimization efficiency.
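The standard firefly movement rule that such improved variants start from (background only; the paper's complex-method guidance and local search operator are not reproduced here) is sketched below with assumed coefficient values.

```python
# Standard firefly movement step (background sketch; not the paper's improved variant).
import numpy as np

rng = np.random.default_rng(1)
beta0, gamma, alpha = 1.0, 1.0, 0.2     # attractiveness, light absorption, randomization (assumed values)

def move_towards(x_i, x_j):
    # Firefly i moves towards brighter firefly j; attraction decays with squared distance.
    r2 = np.sum((x_i - x_j) ** 2)
    attraction = beta0 * np.exp(-gamma * r2)
    return x_i + attraction * (x_j - x_i) + alpha * (rng.random(x_i.shape) - 0.5)

x_i, x_j = rng.uniform(-5, 5, 3), rng.uniform(-5, 5, 3)
print(move_towards(x_i, x_j))
```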
To address the problems of small gradients, a low learning rate, and slow error convergence when a DBN uses the back-propagation process to adjust the network connection weights and biases, a new algorithm that combines multi-innovation theory with the standard DBN algorithm is proposed, namely the multi-innovation DBN (MI-DBN). It sets up a new model of the back-propagation process in the DBN algorithm, extending the use of a single innovation in the previous algorithm to the use of innovations from multiple preceding periods, thus largely increasing the convergence rate of the error. To study the application of the algorithm in social computing, meaningful information about handwritten numbers in social networking images is recognized. This paper compares the MI-DBN algorithm with other representative classifiers through experiments. The results show that the MI-DBN algorithm, compared with other representative classifiers, has a faster convergence rate and a smaller error for MNIST dataset recognition. Handwritten numbers in the images are also recognized with high precision.
Fog computing has recently developed as a new paradigm with the aim of addressing time-sensitive applications better than cloud computing by placing and processing tasks in close proximity to the data sources. However, the majority of the fog nodes in this environment are geographically scattered, with resources that are limited in terms of capabilities compared to cloud nodes, thus making the application placement problem more complex than that in cloud computing. We present an approach for cost-efficient application placement in fog-cloud computing environments that combines the benefits of both fog and cloud computing to optimize the placement of applications and services while minimizing costs. This approach is particularly relevant in scenarios where latency, resource constraints, and cost considerations are crucial factors for the deployment of applications. In this study, we propose a hybrid approach that combines a genetic algorithm (GA) with the Flamingo Search Algorithm (FSA) to place application modules while minimizing cost. We consider four cost types for application deployment: computation, communication, energy consumption, and violations. The proposed hybrid approach is called GA-FSA and is designed to place the application modules considering the deadline of the application and deploy them appropriately to fog or cloud nodes to curtail the overall cost of the system. An extensive simulation is conducted to assess the performance of the proposed approach compared to other state-of-the-art approaches. The results demonstrate that the GA-FSA approach is superior to the other approaches with respect to task guarantee ratio (TGR) and total cost.
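The abstract names four cost components but not how they are combined; a minimal sketch of a total-cost fitness that a GA-FSA-style placement could minimize is shown below, assuming a simple weighted sum (the weights, cost tables, and names are placeholders, not the paper's model).

```python
# Hypothetical total-cost fitness for a module placement (weighted sum is an assumption, not the paper's model).
def total_cost(placement, costs, weights=(1.0, 1.0, 1.0, 1.0)):
    # placement: dict module -> node; costs: dict of per-(module, node) cost components.
    w_comp, w_comm, w_energy, w_viol = weights
    return sum(
        w_comp * costs["computation"][(m, n)]
        + w_comm * costs["communication"][(m, n)]
        + w_energy * costs["energy"][(m, n)]
        + w_viol * costs["violation"][(m, n)]
        for m, n in placement.items()
    )

# Tiny usage example with made-up numbers for two modules and two node types.
costs = {k: {("m1", "fog"): 1.0, ("m1", "cloud"): 0.6, ("m2", "fog"): 0.4, ("m2", "cloud"): 0.9}
         for k in ("computation", "communication", "energy", "violation")}
print(total_cost({"m1": "cloud", "m2": "fog"}, costs))   # 4.0
```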
Cloud computing is a dynamic and rapidly evolving field, where the demand for resources fluctuates continuously. This paper delves into the imperative need for adaptability in the allocation of resources to applications and services within cloud computing environments. The motivation stems from the pressing issue of accommodating fluctuating levels of user demand efficiently. By adhering to the proposed resource allocation method, we aim to achieve a substantial reduction in energy consumption. This reduction hinges on the precise and efficient allocation of resources to the tasks that require them most, aligning with the broader goal of sustainable and eco-friendly cloud computing systems. To enhance the resource allocation process, we introduce a novel knowledge-based optimization algorithm. In this study, we rigorously evaluate its efficacy by comparing it to existing algorithms, including the Flower Pollination Algorithm (FPA), Spark Lion Whale Optimization (SLWO), and the Firefly Algorithm. Our findings reveal that our proposed algorithm, the Knowledge Based Flower Pollination Algorithm (KB-FPA), consistently outperforms these conventional methods in both resource allocation efficiency and energy consumption reduction. This paper underscores the profound significance of resource allocation in the realm of cloud computing. By addressing the critical issue of adaptability and energy efficiency, it lays the groundwork for a more sustainable future in cloud computing systems. Our contribution to the field lies in the introduction of a new resource allocation strategy, offering the potential for significantly improved efficiency and sustainability within cloud computing infrastructures.
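For reference, the standard flower pollination update rules that KB-FPA presumably extends (the knowledge-based modification itself is not described in the abstract) are, in the usual formulation of the Flower Pollination Algorithm:

```latex
\text{Global pollination: } x_i^{t+1} = x_i^{t} + \gamma\, L(\lambda)\,\bigl(g_{*} - x_i^{t}\bigr),
\qquad
\text{Local pollination: } x_i^{t+1} = x_i^{t} + \epsilon\,\bigl(x_j^{t} - x_k^{t}\bigr),
```

where L(λ) is a Lévy-distributed step size, g* is the current best solution, and ε is drawn uniformly from [0, 1].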
In the cloud environment, ensuring a high level of data security is in high demand. Data planning storage optimization is part of the whole security process in the cloud environment. It enables data security by avoiding the risk of data loss and data overlapping. The development of data flow scheduling approaches in the cloud environment that take security parameters into account is insufficient. In our work, we propose a data scheduling model for the cloud environment. The model is made up of three parts that together help dispatch user data flows to the appropriate cloud VMs. The first component is the Collector Agent, which must periodically collect information on the state of the network links. The second one is the Monitoring Agent, which must then analyze, classify, and make a decision on the state of the link and finally transmit this information to the scheduler. The third one is the Scheduler, which must consider the previous information to transfer user data, including fair distribution and reliable paths. It should be noted that each part of the proposed model requires the development of its own algorithms. In this article, we are interested in the development of data transfer algorithms, including fairness distribution with the consideration of a stable link state. These algorithms are based on the grouping of transmitted files and the iterative method. The proposed algorithms show the performance needed to obtain an approximate solution to the studied problem, which is an NP-hard problem. The experimental results show that the best algorithm is the half-grouped minimum excluding (HME), with a percentage of 91.3%, an average deviation of 0.042, and an execution time of 0.001 s.
As the extensive use of cloud computing raises questions about the security of any personal data stored there, cryptography is being used more frequently as a security tool to protect data confidentiality and privacy in the cloud environment. A hypervisor is a virtualization software used in cloud hosting to divide and allocate resources on various pieces of hardware. The choice of hypervisor can significantly impact the performance of cryptographic operations in the cloud environment. An important issue that must be carefully examined is that no hypervisor is completely superior in terms of performance; each hypervisor should be examined to meet specific needs. The main objective of this study is to provide accurate results to compare the performance of Hyper-V and Kernel-based Virtual Machine (KVM) while implementing different cryptographic algorithms to guide cloud service providers and end users in choosing the most suitable hypervisor for their cryptographic needs. This study evaluated the efficiency of two hypervisors, Hyper-V and KVM, in implementing six cryptographic algorithms: Rivest-Shamir-Adleman (RSA), Advanced Encryption Standard (AES), Triple Data Encryption Standard (TripleDES), Carlisle Adams and Stafford Tavares (CAST-128), Blowfish, and Twofish. The study's findings show that KVM outperforms Hyper-V, with 12.2% less Central Processing Unit (CPU) use and 12.95% less time overall for encryption and decryption operations with various file sizes. The study's findings emphasize how crucial it is to pick a hypervisor that is appropriate for cryptographic needs in a cloud environment, which could assist both cloud service providers and end users. Future research may focus more on how various hypervisors perform while handling cryptographic workloads.
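Below is a minimal sketch of the kind of measurement such a comparison rests on, timing AES-CTR encryption of an in-memory buffer with the Python cryptography package inside each guest; the buffer size and repetition count are arbitrary, and the study's actual benchmarking harness is not described in the abstract.

```python
# Illustrative timing of AES-CTR encryption (sketch of a benchmark step; not the study's actual harness).
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(32), os.urandom(16)      # AES-256 key and CTR nonce (benchmark only, not a secure protocol)
data = os.urandom(16 * 1024 * 1024)              # 16 MiB test buffer (arbitrary size)

def time_encrypt(repeats=5):
    best = float("inf")
    for _ in range(repeats):
        encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        start = time.perf_counter()
        encryptor.update(data)
        encryptor.finalize()
        best = min(best, time.perf_counter() - start)
    return best

print(f"best of 5 runs: {time_encrypt():.4f} s")
```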
With the development of vehicles towards intelligence and connectivity, vehicular data is diversifying and growing dramatically. A task allocation model and algorithm for heterogeneous Intelligent Connected Vehicle (ICV) applications are proposed for a dispersed computing network composed of heterogeneous task vehicles and Network Computing Points (NCPs). Considering the amount of task data and the idle resources of NCPs, a computing resource scheduling model for NCPs is established. Taking the heterogeneous task execution delay threshold as a constraint, the optimization problem is described as the problem of maximizing the utilization of the computing resources of NCPs. The proposed problem is proven to be NP-hard by reduction to the 0-1 knapsack problem. A many-to-many matching algorithm based on resource preferences is proposed. The algorithm first establishes mutual preference lists based on the adaptability of the task requirements and the resources provided by NCPs. This enables the filtering out of un-schedulable NCPs in the initial stage of matching, reducing the dimension of the solution space. To solve the matching problem between ICVs and NCPs, a new many-to-many matching algorithm is proposed to obtain a unique and stable optimal matching result. The simulation results demonstrate that the proposed scheme can improve the resource utilization of NCPs by an average of 9.6% compared to the reference scheme, and the total performance can be improved by up to 15.9%.
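Since the hardness argument rests on a reduction to the 0-1 knapsack problem, a compact reminder of that problem's standard dynamic-programming formulation is given below (illustrative values; the paper's matching algorithm itself is not reproduced).

```python
# Standard 0-1 knapsack dynamic program (background for the NP-hardness reduction; illustrative data).
def knapsack(values, weights, capacity):
    # best[c] = best total value achievable with capacity c using the items seen so far.
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        for c in range(capacity, weight - 1, -1):   # iterate downwards so each item is used at most once
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```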
Artificial neural networks (ANNs) have led to landmark changes in many fields, but they still differ significantly from the mechanisms of real biological neural networks and face problems such as high computing costs, excessive computing power, and so on. Spiking neural networks (SNNs) provide a new approach, combined with brain-like science, to improve the computational energy efficiency, computational architecture, and biological credibility of current deep learning applications. In the early stage of development, their poor performance hindered the application of SNNs in real-world scenarios. In recent years, SNNs have made great progress in computational performance and practicability compared with earlier research results, and are continuously producing significant results. Although there are already many pieces of literature on SNNs, there is still a lack of a comprehensive review on SNNs from the perspective of improving performance and practicality as well as incorporating the latest research results. Starting from this issue, this paper elaborates on SNNs along the complete usage process of SNNs, including network construction, data processing, model training, development, and deployment, aiming to provide more comprehensive and practical guidance to promote the development of SNNs. Therefore, the connotation and development status of SNN computing are reviewed systematically and comprehensively from four aspects: composition structure, data sets, learning algorithms, and software/hardware development platforms. Then the development characteristics of SNNs in intelligent computing are summarized, the current challenges of SNNs are discussed, and the future development directions are prospected. Our research shows that in the fields of machine learning and intelligent computing, SNNs have comparable network scale and performance to ANNs and the ability to tackle large datasets and a variety of tasks. The advantages of SNNs over ANNs in terms of energy efficiency and spatial-temporal data processing have been more fully exploited. And the development of programming and deployment tools has lowered the threshold for the use of SNNs. SNNs show a broad development prospect for brain-like computing.
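As background for readers new to SNNs (a textbook model, not something specific to this survey), the leaky integrate-and-fire neuron that many SNN frameworks build on can be simulated in a few lines; the time constant, threshold, and input drive below are assumed illustrative values.

```python
# Textbook leaky integrate-and-fire (LIF) neuron simulation (illustrative parameters).
dt, T = 1e-3, 0.2                 # time step and duration in seconds (assumptions)
tau, v_rest, v_thresh, v_reset = 20e-3, -65.0, -50.0, -65.0   # time constant in s, potentials in mV (typical values)
input_drive = 20.0                # constant depolarizing drive in mV (assumption)

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    # Euler step of tau * dv/dt = -(v - v_rest) + input_drive
    v += dt / tau * (-(v - v_rest) + input_drive)
    if v >= v_thresh:             # emit a spike and reset the membrane potential
        spike_times.append(step * dt)
        v = v_reset
print(f"{len(spike_times)} spikes in {T} s")
```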
With the projected global surge in hydrogen demand, driven by increasing applications and the imperative for low-emission hydrogen, the integration of machine learning (ML) across the hydrogen energy value chain is a compelling avenue. This review uniquely focuses on harnessing the synergy between ML and computational modeling (CM) or optimization tools, as well as integrating multiple ML techniques with CM, for the synthesis of diverse hydrogen evolution reaction (HER) catalysts and various hydrogen production processes (HPPs). Furthermore, this review addresses a notable gap in the literature by offering insights, analyzing challenges, and identifying research prospects and opportunities for sustainable hydrogen production. While the literature reflects a promising landscape for ML applications in hydrogen energy domains, transitioning AI-based algorithms from controlled environments to real-world applications poses significant challenges. Hence, this comprehensive review delves into the technical, practical, and ethical considerations associated with the application of ML in HER catalyst development and HPP optimization. Overall, this review provides guidance for unlocking the transformative potential of ML in enhancing prediction efficiency and sustainability in the hydrogen production sector.
More devices in the Intelligent Internet of Things (AIoT) result in an increased number of tasks that require low latency and real-time responsiveness, leading to an increased demand for computational resources. Cloud computing's low-latency performance issues in AIoT scenarios have led researchers to explore fog computing as a complementary extension. However, the effective allocation of resources for task execution within fog environments, characterized by limitations and heterogeneity in computational resources, remains a formidable challenge. To tackle this challenge, in this study, we integrate fog computing and cloud computing. We begin by establishing a fog-cloud environment framework, followed by the formulation of a mathematical model for task scheduling. Lastly, we introduce an enhanced hybrid Equilibrium Optimizer (EHEO) tailored for AIoT task scheduling. The overarching objective is to decrease both the makespan and energy consumption of the fog-cloud system while accounting for task deadlines. The proposed EHEO method undergoes a thorough evaluation against multiple benchmark algorithms, encompassing metrics like makespan, total energy consumption, success rate, and average waiting time. Comprehensive experimental results unequivocally demonstrate the superior performance of EHEO across all assessed metrics. Notably, in the most favorable conditions, EHEO significantly diminishes both the makespan and energy consumption by approximately 50% and 35.5%, respectively, compared to the second-best performing approach, which affirms its efficacy in advancing the efficiency of AIoT task scheduling within fog-cloud networks.