Well-organized datacentres with interconnected servers constitute the cloud computing infrastructure. User requests are submitted through an interface to these servers, which provide service on an on-demand basis. Scheduling the scientific applications that execute in the cloud, using the heterogeneous resources allocated to them dynamically, falls into the NP-hard problem category. Task scheduling in the cloud poses numerous challenges that impact cloud performance; if not handled properly, user satisfaction becomes questionable. Recently, researchers have proposed meta-heuristic solutions to enrich the task scheduling activity in the cloud environment. The prime aim of task scheduling is to utilize the available resources optimally and to reduce the time span of task execution. This work proposes an improvised seagull optimization algorithm that combines features of Cuckoo Search (CS) and the Seagull Optimization Algorithm (SOA) to enhance scheduling performance in the cloud computing environment. The proposed algorithm aims to minimize the cost and time spent during task scheduling in the heterogeneous cloud environment. Performance evaluation of the proposed algorithm was performed using the CloudSim 3.0 toolkit, comparing it with Multi-objective Ant Colony Optimization (MO-ACO), ACO, and Min-Min algorithms. With 300 VMs, the proposed SOA-CS technique improved makespan by 1.06%, 4.2%, and 2.4% and reduced the overall cost by 1.74%, 3.93%, and 2.77% compared with the PSO, ACO, and IDEA algorithms, respectively. The comparative simulation results show that the proposed improvised seagull optimization algorithm fares better than its contemporaries.
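To make the optimization target concrete, the following minimal Python sketch (not taken from the paper; the task lengths, VM speeds, and cost rates are made-up example values) evaluates the makespan and cost of one candidate task-to-VM assignment, which is the kind of objective a meta-heuristic such as SOA-CS searches over.

```python
# Illustrative sketch (not the paper's code): the kind of makespan/cost
# objective a meta-heuristic scheduler such as SOA-CS would minimize.
# Task lengths, VM speeds, and cost rates are hypothetical example values.

def evaluate_schedule(assignment, task_lengths, vm_speeds, vm_cost_per_sec):
    """assignment[i] = index of the VM that runs task i."""
    vm_busy = [0.0] * len(vm_speeds)
    total_cost = 0.0
    for task, vm in enumerate(assignment):
        runtime = task_lengths[task] / vm_speeds[vm]   # seconds on that VM
        vm_busy[vm] += runtime                          # tasks on a VM run back to back
        total_cost += runtime * vm_cost_per_sec[vm]
    makespan = max(vm_busy)                             # finish time of the busiest VM
    return makespan, total_cost

# Example: 5 tasks (million instructions), 2 VMs (MIPS), per-second prices.
print(evaluate_schedule([0, 1, 0, 1, 1], [400, 250, 300, 500, 200],
                        [100.0, 150.0], [0.02, 0.035]))
```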
AI (Artificial Intelligence) workloads are proliferating in modern real-time systems. As the tasks of AI workloads fluctuate over time, resource planning policies used for traditional fixed real-time tasks should be re-examined. In particular, it is difficult to handle changes in real-time tasks immediately without violating the deadline constraints. To cope with this situation, this paper analyzes the task situations of AI workloads and makes the following two observations. First, resource planning for AI workloads is a complicated search problem that requires much time for optimization. Second, although the task set of an AI workload may change over time, the possible combinations of the task sets are known in advance. Based on these observations, this paper proposes a new resource planning scheme for AI workloads that supports the re-planning of resources. Instead of generating resource plans on the fly, the proposed scheme pre-determines resource plans for various combinations of tasks. Thus, in any case, the workload is immediately executed according to the maintained resource plan. Specifically, the proposed scheme maintains an optimized CPU (Central Processing Unit) and memory resource plan using genetic algorithms and applies it as soon as the workload changes. The proposed scheme is implemented in the open-source simulator SimRTS to validate its effectiveness. Simulation experiments show that the proposed scheme reduces the energy consumption of CPU and memory by 45.5% on average without deadline misses.
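A minimal sketch of the pre-planning idea described above, assuming a toy plan_search() in place of the paper's genetic algorithm and hypothetical task parameters: plans are computed offline for every known task-set combination and simply looked up when the workload changes.

```python
# Minimal sketch (assumed design, not the paper's code): pre-compute a resource
# plan per possible task-set combination offline, then switch plans instantly
# when the workload changes. plan_search() stands in for the paper's genetic
# algorithm and simply picks the lowest CPU frequency that meets utilization.

from itertools import combinations

def plan_search(task_set, freq_levels=(0.6, 0.8, 1.0)):
    # Toy stand-in for GA optimization: utilization = sum(wcet/period) must fit
    # the chosen CPU frequency; a lower frequency is assumed to save energy.
    util = sum(wcet / period for wcet, period in task_set)
    for f in freq_levels:
        if util <= f:
            return {"cpu_freq": f, "mem_mb": 128 * len(task_set)}
    return {"cpu_freq": 1.0, "mem_mb": 128 * len(task_set)}

tasks = {"detect": (2, 10), "track": (1, 5), "plan": (3, 20)}   # (WCET, period)
plans = {}
for r in range(1, len(tasks) + 1):                   # all known combinations
    for names in combinations(sorted(tasks), r):
        plans[names] = plan_search([tasks[n] for n in names])

active = ("detect", "track")                         # workload changes at run time
print(plans[active])                                  # plan applied immediately
```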
Cloud computing plays a significant role in the Information Technology (IT) industry by delivering scalable resources as a service. One of the most important factors in increasing the performance of a cloud server is maximizing resource utilization in task scheduling; the main advantage of this scheduling is to maximize performance and minimize time loss. Various researchers have examined numerous scheduling methods to achieve Quality of Service (QoS) and to reduce execution time, but these suffer from low throughput and high response time. Hence, this study aims to schedule tasks efficiently and to eliminate faults in scheduling tasks to the Virtual Machines (VMs). For this purpose, the research proposes novel Particle Swarm Optimization-Bandwidth Aware divisible Task (PSO-BATS) scheduling with Multi-Layered Regression Host Employment (MLRHE) to sort out the issues of task scheduling and ease the scheduling operation through load balancing. The proposed efficient scheduling benefits both cloud users and servers. The performance evaluation is undertaken with respect to cost, Performance Improvement Rate (PIR), and makespan, which reveals the efficiency of the proposed method. Additionally, a comparative analysis confirms that the introduced system schedules tasks with higher flexibility than conventional systems.
This paper focuses on the scheduling problem of workflow tasks that exhibit interdependencies. Unlike independent batch tasks, workflows typically consist of multiple subtasks with intrinsic correlations and dependencies. This necessitates distributing the various computational tasks to appropriate computing node resources in accordance with task dependencies to ensure the smooth completion of the entire workflow. Workflow scheduling must consider an array of factors, including task dependencies, availability of computational resources, and the schedulability of tasks. Therefore, this paper delves into the distributed graph database workflow task scheduling problem and proposes a workflow scheduling methodology based on deep reinforcement learning (DRL). The method optimizes the maximum completion time (makespan) and the response time of workflow tasks, aiming to enhance the responsiveness of workflow tasks while ensuring the minimization of the makespan. The experimental results indicate that the Q-learning Deep Reinforcement Learning (Q-DRL) algorithm markedly diminishes the makespan and refines the average response time within distributed graph database environments. In terms of makespan, Q-DRL achieves mean reductions of 12.4% and 11.9% over the established First-fit and Random scheduling strategies, respectively. Additionally, Q-DRL surpasses the performance of both the DRL-Cloud and Improved Deep Q-learning Network (IDQN) algorithms, with improvements of 4.4% and 2.6%, respectively. With reference to average response time, Q-DRL exhibits significantly better performance in scheduling workflow tasks, decreasing the average by 2.27% and 4.71% compared to IDQN and DRL-Cloud, respectively. The Q-DRL algorithm also demonstrates a notable increase in the efficiency of system resource utilization, reducing the average idle rate by 5.02% and 9.30% in comparison to IDQN and DRL-Cloud, respectively. These findings support the assertion that Q-DRL not only upholds a lower average idle rate but also effectively curtails the average response time, thereby substantially improving processing efficiency and optimizing resource utilization within distributed graph database systems.
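For readers unfamiliar with the underlying mechanism, the sketch below shows the tabular Q-learning update that Q-DRL-style schedulers build on; the state encoding, reward, and candidate nodes are simplified placeholders rather than the authors' design, which uses deep networks over workflow/DAG states.

```python
# Hedged illustration (not the authors' implementation): the tabular Q-learning
# update behind a Q-DRL-style scheduler. States, actions, and the reward are
# toy placeholders; the paper works with deep networks and workflow-specific state.

import random
from collections import defaultdict

Q = defaultdict(float)          # Q[(state, action)] -> expected return
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def choose_node(state, nodes):
    if random.random() < epsilon:                      # explore
        return random.choice(nodes)
    return max(nodes, key=lambda a: Q[(state, a)])     # exploit

def update(state, action, reward, next_state, nodes):
    best_next = max(Q[(next_state, a)] for a in nodes)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One toy step: reward is the negative task completion time, so shorter is better.
nodes = [0, 1, 2]
s, s_next = ("ready:3", "load:low"), ("ready:2", "load:mid")
a = choose_node(s, nodes)
update(s, a, reward=-4.2, next_state=s_next, nodes=nodes)
print(a, Q[(s, a)])
```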
In recent decades, fog computing has played a vital role in executing parallel computational tasks, specifically scientific workflow tasks. Running workflow applications in cloud data centers alone takes more time; therefore, it is essential to develop effective models for Virtual Machine (VM) allocation and task scheduling in fog computing environments. Effective task scheduling, VM migration, and allocation together optimize the use of computational resources across different fog nodes. This process ensures that tasks are executed with minimal energy consumption, which reduces the chance of resource bottlenecks. In this manuscript, the proposed framework comprises two phases: (i) effective task scheduling using a fractional selectivity approach and (ii) VM allocation using an algorithm named Fitness Sharing Chaotic Particle Swarm Optimization (FSCPSO). The proposed FSCPSO algorithm integrates the concepts of chaos theory and fitness sharing to effectively balance global exploration and local exploitation. This balance enables the use of a wide range of solutions, leading to minimal total cost and makespan in comparison to other traditional optimization algorithms. The FSCPSO algorithm's performance is analyzed using six evaluation measures, namely Load Balancing Level (LBL), Average Resource Utilization (ARU), total cost, makespan, energy consumption, and response time. Relative to the conventional optimization algorithms, the FSCPSO algorithm achieves a higher LBL of 39.12%, an ARU of 58.15%, a minimal total cost of 1175, and a makespan of 85.87 ms, particularly when evaluated for 50 tasks.
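The following rough sketch, written under stated assumptions and not the published FSCPSO code, illustrates the two ingredients the abstract names: a logistic chaotic map driving the PSO inertia weight, and a fitness-sharing penalty that devalues crowded particles; all constants are illustrative.

```python
# Rough sketch (assumptions, not the published FSCPSO code): a PSO velocity
# update whose inertia weight follows a logistic chaotic map, plus a
# fitness-sharing penalty so clustered particles share credit.

import random

def logistic_map(x):                       # classic chaotic sequence in (0, 1)
    return 4.0 * x * (1.0 - x)

def shared_fitness(raw, position, swarm, sigma=0.5):
    # Fitness sharing: divide by a niche count so crowded regions look less attractive.
    niche = sum(max(0.0, 1.0 - abs(position - p) / sigma) for p in swarm)
    return raw / max(niche, 1.0)

def pso_step(x, v, pbest, gbest, chaos, c1=1.5, c2=1.5):
    w = 0.4 + 0.5 * chaos                  # chaotic inertia weight in [0.4, 0.9]
    v = w * v + c1 * random.random() * (pbest - x) + c2 * random.random() * (gbest - x)
    return x + v, v

chaos = 0.37
swarm = [1.2, 1.3, 3.0]                    # current particle positions (1-D toy case)
x, v = 1.2, 0.1
for _ in range(3):
    chaos = logistic_map(chaos)
    x, v = pso_step(x, v, pbest=1.0, gbest=0.8, chaos=chaos)
print(round(x, 3), round(shared_fitness(2.0, x, swarm), 3))
```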
A dynamic multi-beam resource allocation algorithm for large low Earth orbit (LEO) constellations based on on-board distributed computing is proposed in this paper. The allocation is a combinatorial optimization process under a series of complex constraints, which is important for enhancing the matching between resources and requirements. A complex algorithm is not feasible because the LEO on-board resources are limited. The proposed genetic algorithm (GA), based on a two-dimensional individual model and an uncorrelated single paternal inheritance method, is designed to support distributed computation and thus enhance the feasibility of on-board application. A distributed system composed of eight embedded devices is built to verify the algorithm. A typical scenario is built in the system to evaluate the resource allocation process, the algorithm's mathematical model, the trigger strategy, and the distributed computation architecture. According to the simulation and measurement results, the proposed algorithm can provide an allocation result for more than 1500 tasks in 14 s with a success rate of more than 91% in a typical scene. The response time is decreased by 40% compared with the conventional GA.
More devices in the Intelligent Internet of Things (AIoT) result in an increased number of tasks that require low latency and real-time responsiveness, leading to an increased demand for computational resources. Cloud computing's low-latency performance issues in AIoT scenarios have led researchers to explore fog computing as a complementary extension. However, the effective allocation of resources for task execution within fog environments, which are characterized by limitations and heterogeneity in computational resources, remains a formidable challenge. To tackle this challenge, in this study we integrate fog computing and cloud computing. We begin by establishing a fog-cloud environment framework, followed by the formulation of a mathematical model for task scheduling. Lastly, we introduce an enhanced hybrid Equilibrium Optimizer (EHEO) tailored for AIoT task scheduling. The overarching objective is to decrease both the makespan and the energy consumption of the fog-cloud system while accounting for task deadlines. The proposed EHEO method undergoes a thorough evaluation against multiple benchmark algorithms, encompassing metrics like makespan, total energy consumption, success rate, and average waiting time. Comprehensive experimental results demonstrate the superior performance of EHEO across all assessed metrics. Notably, in the most favorable conditions, EHEO diminishes the makespan and energy consumption by approximately 50% and 35.5%, respectively, compared to the second-best performing approach, which affirms its efficacy in advancing the efficiency of AIoT task scheduling within fog-cloud networks.
A Genetic Algorithm-Ant Colony Algorithm (GA-ACA), which can be used to optimize multi-Unit Under Test (UUT) parallel test task sequences and resource configuration quickly and accurately, is proposed in this paper. After establishing the mathematical model of multi-UUT parallel test tasks and resources, the condition for multi-UUT resource mergence is analyzed to obtain the minimum resource requirement under the minimum test time. The definition of cost efficiency is put forward, followed by the design of a gene coding and path selection scheme that can satisfy multi-UUT parallel test task scheduling. At the start of the algorithm, GA is adopted to provide the initial pheromone for ACA, and then a dual-convergence pheromone feedback mode is applied in ACA to avoid local optima and parameter dependence. Practical application proves that the algorithm has a remarkable effect on solving the problems of multi-UUT parallel test task scheduling and resource configuration.
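A simplified sketch of the GA-to-ACA handoff described above (the objective, operators, and constants are assumptions for illustration, not the paper's model): a short GA run produces good task orderings, and the edges they use receive extra initial pheromone before the ant colony phase starts.

```python
# Simplified sketch (assumptions, not the paper's code) of seeding ACA pheromone
# from a quick GA run over task orderings.

import random

N = 6                                            # number of test tasks
def order_cost(order):                           # stand-in objective: weighted position cost
    return sum((i + 1) * task for i, task in enumerate(order))

def quick_ga(pop_size=20, gens=30):
    pop = [random.sample(range(N), N) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=order_cost)
        elite = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):   # copy an elite parent, apply a swap mutation
            child = random.choice(elite)[:]
            i, j = sorted(random.sample(range(N), 2))
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    return pop[:5]                               # best orderings seed the pheromone

tau0, boost = 0.1, 0.5
pheromone = [[tau0] * N for _ in range(N)]
for order in quick_ga():
    for a, b in zip(order, order[1:]):
        pheromone[a][b] += boost                 # ACA then starts from this biased matrix
print(pheromone[0])
```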
A checkpointing scheme for related distributed real-time tasks that can be scheduled as a DAG is proposed. A typical algorithm, OSA, is selected for DAG scheduling. A new method based on a new structure, the Scheduled Cluster Tree, is presented to calculate the slack time of each task in a task cluster. In the checkpointing scheme, the optimal checkpoint intervals that minimize the approximated failure probability are derived formally and validated experimentally. The complexity of computing the approximated failure probability is quite small compared with that of the exact probability. The consistency of the checkpointing is also discussed.
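As a point of reference for the interval-optimization idea, the snippet below computes the classical first-order optimal checkpoint interval (Young's approximation). It illustrates the trade-off between checkpoint cost and failure rate only; it is not the interval formula derived in the paper, and the cost and failure-rate values are examples.

```python
# Illustrative calculation only: the paper derives its own optimal interval from an
# approximated failure probability; shown here is the classical first-order result
# T_opt = sqrt(2 * C / lambda) (Young's formula) for the same cost-vs-failure trade-off.

import math

def optimal_checkpoint_interval(checkpoint_cost_s, failure_rate_per_s):
    return math.sqrt(2.0 * checkpoint_cost_s / failure_rate_per_s)

C = 5.0                      # seconds to write one checkpoint (example value)
lam = 1.0 / 3600.0           # one failure per hour on average (example value)
T = optimal_checkpoint_interval(C, lam)
print(f"checkpoint every {T:.0f} s (~{T/60:.1f} min)")
```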
With the development of mobile communication technology, a wide variety of envisioned intelligent transportation systems have emerged and put forward more stringent requirements for vehicular communications. Most computation-intensive and power-hungry applications result in a large amount of energy consumption and computation cost, which brings great challenges to the on-board system. It is necessary to exploit traffic offloading and scheduling in vehicular networks to ensure the Quality of Experience (QoE). In this paper, a joint offloading strategy based on quantum particle swarm optimization for Mobile Edge Computing (MEC)-enabled vehicular networks is presented. To minimize the delay cost and energy consumption, a task execution optimization model is formulated to assign each task to the available service nodes, which include the service vehicles and the nearby Road Side Units (RSUs). For the task offloading process via Vehicle-to-Vehicle (V2V) communication, a vehicle selection algorithm is introduced to obtain an optimal offloading decision sequence. Next, an improved quantum particle swarm optimization algorithm for joint offloading is proposed to optimize the task delay and energy consumption. To maintain the diversity of the population, a crossover operator is introduced to exchange information among individuals, and a crossover probability is defined to improve the search ability and convergence speed of the algorithm. Meanwhile, an adaptive shrinkage-expansion factor is designed to improve the local search accuracy in the later iterations. Simulation results show that the proposed joint offloading strategy can effectively reduce the system overhead and the task completion delay under different system parameters.
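The sketch below shows the core quantum-behaved PSO position update with a contraction-expansion factor that shrinks over the iterations, which is the kind of mechanism the adaptive shrinkage-expansion factor refers to; the one-dimensional objective and all constants are illustrative assumptions, not the authors' vehicular model.

```python
# Hedged sketch (not the authors' code): quantum-behaved PSO with a
# contraction-expansion factor beta that shrinks linearly over the iterations.

import math
import random

def qpso_update(x, pbest, gbest, mbest, beta):
    phi = random.random()
    p = phi * pbest + (1.0 - phi) * gbest            # local attractor
    u = random.random()
    step = beta * abs(mbest - x) * math.log(1.0 / u)
    return p + step if random.random() < 0.5 else p - step

def objective(x):                                    # toy cost to minimize
    return (x - 2.0) ** 2

xs = [random.uniform(-5, 5) for _ in range(10)]
pbests = xs[:]
for it in range(50):
    beta = 1.0 - 0.5 * it / 50                       # adaptive shrinkage-expansion factor
    gbest = min(pbests, key=objective)
    mbest = sum(pbests) / len(pbests)                # mean of personal bests
    xs = [qpso_update(x, pb, gbest, mbest, beta) for x, pb in zip(xs, pbests)]
    pbests = [x if objective(x) < objective(pb) else pb for x, pb in zip(xs, pbests)]
print(round(min(pbests, key=objective), 3))
```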
Deploying service nodes hierarchically at the edge of the network can effectively improve the service quality of offloaded task requests and increase the utilization of resources. In this paper, we study the task scheduling problem in the hierarchically deployed edge cloud. We first formulate the minimization of the service time of scheduled tasks in the edge cloud as a combinatorial optimization problem and then prove the NP-hardness of the problem. Different from existing work, which mostly designs heuristic approximation-based algorithms or policies to make scheduling decisions, we propose a newly designed scheduling policy named Joint Neural Network and Heuristic Scheduling (JNNHSP), which combines a neural network-based method with a heuristic-based solution. JNNHSP takes a Sequence-to-Sequence (Seq2Seq) model trained by Reinforcement Learning (RL) as the primary policy and adopts a heuristic algorithm as the auxiliary policy to obtain the scheduling solution, thereby achieving a good balance between the quality and the efficiency of the scheduling solution. In-depth experiments show that, compared with a variety of related policies and optimization solvers, JNNHSP achieves better performance in terms of scheduling error ratio, the degree to which the policy is affected by resource limitations, average service latency, and execution efficiency in a typical hierarchical edge cloud.
Cloud computing has taken over the high-performance distributed computing area, and it currently provides on-demand services and resource pooling over the web. As a result of constantly changing user service demand, the task scheduling problem has emerged as a critical analytical topic in cloud computing. The primary goal of scheduling tasks is to distribute tasks to available processors to construct the shortest possible schedule without breaching precedence restrictions. Assignments and schedules of tasks substantially influence system operation in a heterogeneous multiprocessor system, and the diverse processes inside a heuristic-based task scheduling method will result in varying makespan in the heterogeneous computing system. As a result, an intelligent scheduling algorithm should efficiently determine the priority of every subtask based on the resources necessary to lower the makespan. This research introduces a novel, efficient task scheduling method for cloud computing systems based on the cooperation search algorithm to tackle the essential task assignment and scheduling problem in heterogeneous cloud computing. The basic idea of this method is to use the advantages of meta-heuristic algorithms to obtain the optimal solution. We assess our algorithm's performance by running it through three scenarios with varying numbers of tasks. The findings demonstrate that the suggested technique beats the existing methods New Genetic Algorithm (NGA), Genetic Algorithm (GA), Whale Optimization Algorithm (WOA), Gravitational Search Algorithm (GSA), and Hybrid Heuristic and Genetic (HHG) by 7.9%, 2.1%, 8.8%, 7.7%, and 3.4%, respectively, in terms of makespan.
Numerous methods have been analysed in detail to improve task scheduling and data security performance in the cloud environment. These methods schedule according to factors like makespan, waiting time, cost, deadline, and popularity; however, they are inadequate for achieving higher scheduling performance. Regarding data security, existing methods use various encryption schemes but introduce significant service interruption. This article sketches a practical Real-time Application Centric TRS (Throughput-Resource utilization-Success) Scheduling with Data Security (RATRSDS) model that considers all these issues in task scheduling and data security. The method identifies the required resources and their claim time on receiving the service requests. Further, for the list of resources as services, the method computes throughput support (Thrs) according to the number of statements executed and the complete statements of the service. Similarly, the method computes resource utilization support (Ruts) according to the idle time on any duty cycle and the total servicing time, and computes the value of success support (Sus) according to the number of completions for the number of allocations. Using all these support measures, the method estimates the TRS (Throughput-Resource utilization-Success) score for different resources. According to the value of the TRS score, the services are ranked and scheduled. On the other side, based on the requirements of service requests, the method computes Requirement Support (RS), and service selection and allocation are performed accordingly. Similarly, route security is enforced by choosing the route according to the Route Support Measure (RSM). Finally, data security is implemented with a service-based encryption technique. The RATRSDS scheme achieves higher performance in data security and scheduling.
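An interpretive sketch of the support measures described above; the exact formulas and the way they are combined into the TRS score are assumptions made for illustration rather than definitions taken from the paper.

```python
# Interpretive sketch (formula details are assumptions, not taken from the paper):
# throughput support from executed vs. total statements, resource-utilization
# support from busy vs. total time, success support from completions vs.
# allocations, combined into a single TRS score used for ranking.

def trs_score(executed, total_statements, idle_time, total_time,
              completions, allocations):
    thrs = executed / total_statements                 # throughput support
    ruts = (total_time - idle_time) / total_time       # resource utilization support
    sus = completions / allocations                    # success support
    return thrs * ruts * sus                           # assumed combination rule

services = {
    "svc-A": trs_score(950, 1000, idle_time=12, total_time=100, completions=48, allocations=50),
    "svc-B": trs_score(700, 1000, idle_time=40, total_time=100, completions=45, allocations=50),
}
# Services are ranked by TRS score and scheduled in that order.
print(sorted(services, key=services.get, reverse=True))
```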
The rapid growth of service-oriented and cloud computing has created large-scale data centres worldwide. Modern data centres' operating costs mostly come from back-end cloud infrastructure and energy consumption. Cloud computing requires extensive communication resources, and cloud applications require more bandwidth to transfer large amounts of data to satisfy end-user requirements. It is also essential that no communication source causes congestion or packet loss owing to unnecessary switching buffers. This paper proposes a novel Energy and Communication (EC) aware scheduling (EC-scheduler) algorithm for green cloud computing, which optimizes data centre energy consumption and traffic load. The primary goal of the proposed EC-scheduler is to assign user applications to cloud data centre resources with minimal utilization of data centres. We first introduce a Multi-Objective Leader Salp Swarm (MLSS) algorithm for task sorting, which ensures traffic load balancing, and then an Emotional Artificial Neural Network (EANN) for efficient resource allocation. The EC-scheduler schedules cloud user requirements to the cloud server by optimizing both energy and communication delay, which supports lower carbon dioxide emissions by the cloud server system, enabling a green, unalloyed environment. We tested the proposed scheme and existing cloud scheduling methods using the GreenCloud simulator to analyze the efficiency of optimizing data centre energy and other scheduler metrics. For the EC-scheduler, the parameters Power Usage Effectiveness (PUE), Data Centre Energy Productivity (DCEP), throughput, Average Execution Time (AET), energy consumption, and makespan showed up to 26.738%, 37.59%, 50%, 4.34%, 34.2%, and 33.54% higher efficiency, respectively, than existing state-of-the-art schedulers with respect to the number of user applications and number of user requests.
Task scheduling plays a crucial role in cloud computing and is a key factor determining cloud computing performance. To solve the task scheduling problem for remote sensing data processing in cloud computing, this paper proposes a workflow task scheduling algorithm: the Workflow Task Scheduling Algorithm based on Deep Reinforcement Learning (WDRL). The remote sensing data processing task is modelled as a directed acyclic graph scheduling problem. The algorithm is then designed by establishing a Markov decision model and adopting a fitness calculation method. Finally, it combines the advantages of reinforcement learning and deep neural networks to minimize the makespan of remote sensing data processes from experience. The experiments are based on CloudSim and Python and compare the change in completion time for remote sensing data processing. The results show that, compared with several traditional meta-heuristic scheduling algorithms, WDRL can effectively achieve the goal of optimizing task scheduling efficiency.
Deploying task caching at edge servers has become an effective way to handle compute-intensive and latency-sensitive tasks on the industrial internet. However, how to select the task scheduling location to reduce task delay and cost while ensuring data security and reliable communication in edge computing remains a challenge. To solve this problem, this paper establishes a task scheduling model with joint blockchain and task caching in the industrial internet and designs a novel blockchain-assisted caching mechanism to enhance system security. The task scheduling problem, which couples the task scheduling decision, the task caching decision, and the blockchain reward, is formulated as a minimum weighted cost problem under delay constraints. This is a mixed-integer nonlinear problem, which is proved to be nonconvex and NP-hard. To obtain the optimal solution, this paper proposes a task scheduling strategy algorithm based on an improved genetic algorithm (IGA-TSPA), which improves the genetic algorithm's initialization and mutation operations to reduce the size of the initial solution space and increase the convergence speed towards the optimal solution. In addition, an Improved Least Frequently Used algorithm is proposed to improve the content hit rate. Simulation results show that IGA-TSPA finds optimal solutions faster and has a shorter running time compared with existing edge computing scheduling algorithms. The established task scheduling model not only saves 62.19% of system overhead consumption in comparison with local computing but also has great significance in protecting data security, reducing task processing delay, and reducing system cost.
Cloud computing technology is favored by users because of its strong computing power and convenient services. At the same time, scheduling performance has a significant impact on promoting carbon neutrality. Current scheduling research in the multi-cloud environment aims to address the challenges that business demands bring to cloud data centers during peak hours; the scheduling problem therefore has promising application prospects in the multi-cloud environment. This paper points out that the currently studied scheduling problems in the multi-cloud environment mainly include independent task scheduling and workflow task scheduling based on the dependencies between tasks. This paper reviews the concepts, types, objectives, advantages, challenges, and research status of task scheduling in the multi-cloud environment. Task scheduling strategies proposed in the existing related references are analyzed, discussed, and summarized, including research motivation, optimization algorithms, and related objectives. Finally, the research status of the two kinds of task scheduling is compared, and several important future research directions for multi-cloud task scheduling are proposed.
Due to the security and scalability features of the hybrid cloud architecture, it can better meet the diverse requirements of users for cloud services, and a reasonable resource allocation solution is the key to adequately utilizing the hybrid cloud. However, most previous studies have not comprehensively optimized the performance of hybrid cloud task scheduling, and some even ignore the conflicts between its security and privacy features and other requirements. To address these problems, a many-objective hybrid cloud task scheduling optimization model (HCTSO) is constructed, combining risk rate, resource utilization, total cost, and task completion time. Meanwhile, an opposition-based learning knee point-driven many-objective evolutionary algorithm (OBL-KnEA) is proposed to improve the performance of model solving. The algorithm uses opposition-based learning to generate initial populations for faster convergence. Furthermore, a perturbation-based multipoint crossover operator and a dynamic range mutation operator are designed to extend the search range. Compared with other excellent algorithms in experiments on HCTSO, OBL-KnEA achieves excellent results in terms of evaluation metrics, initial populations, and model optimization effects.
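A small sketch of opposition-based learning initialization as referenced above, assuming a single collapsed fitness value for illustration; the paper's model is many-objective and its exact initialization details may differ.

```python
# Small sketch (assumptions only, not the paper's implementation) of opposition-based
# learning initialization: for every random individual x in [lo, hi], also evaluate
# its opposite lo + hi - x and keep the better half of the combined pool.

import random

def obl_initialize(pop_size, dim, lo, hi, fitness):
    population = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    opposites = [[lo + hi - x for x in ind] for ind in population]
    combined = population + opposites
    combined.sort(key=fitness)                      # smaller fitness = better here
    return combined[:pop_size]

# Toy single-number fitness standing in for the paper's many-objective evaluation.
def fitness(ind):
    return sum(x * x for x in ind)

init_pop = obl_initialize(pop_size=10, dim=4, lo=-5.0, hi=5.0, fitness=fitness)
print([round(fitness(ind), 2) for ind in init_pop[:3]])
```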
The reliability and availability of cloud systems have become major concerns of service providers, brokers, and end-users; therefore, studying fault-tolerance mechanisms in cloud computing attracts intense attention in industry and academia. Task-scheduling mechanisms, which distribute tasks to a group of instances for execution, can improve the fault-tolerance level of cloud systems. Much work has been undertaken in this direction to improve the overall outcome of cloud computing, such as improving service quality and reducing power consumption. However, little work on task scheduling has studied the problem of lost tasks from the broker's perspective. Task loss can happen due to virtual machine failures, server crashes, connection interruption, etc. The broker-based concept means that the backup task can be allocated by the broker on the same cloud service provider (CSP) or a different CSP, to reduce costs for example. This paper proposes a novel fault-tolerant mechanism that employs the primary-backup (PB) model of task scheduling to address this issue. The proposed mechanism minimizes the impact of failure events by reducing the number of lost tasks, and it is further improved to shorten the makespan of submitted tasks in cloud systems. The experiments demonstrated that the proposed mechanism decreased the number of lost tasks by about 13%-15% compared with other mechanisms in the literature.
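The following conceptual sketch illustrates a primary-backup (PB) assignment in a broker setting; the instance list, selection rules, and failure handling are hypothetical simplifications, not the mechanism proposed in the paper.

```python
# Conceptual sketch (hypothetical simplification, not the paper's algorithm):
# each task gets a primary instance and a backup on a different provider,
# and the backup runs only if the primary fails.

import random

instances = [
    {"id": "csp1-vm1", "provider": "CSP-1"},
    {"id": "csp1-vm2", "provider": "CSP-1"},
    {"id": "csp2-vm1", "provider": "CSP-2"},
]

def assign_pb(task_id):
    primary = random.choice(instances)
    backups = [i for i in instances if i["provider"] != primary["provider"]]
    backup = random.choice(backups)                  # broker places the backup on another CSP
    return {"task": task_id, "primary": primary["id"], "backup": backup["id"]}

def execute(plan, primary_failed):
    ran_on = plan["backup"] if primary_failed else plan["primary"]
    return {"task": plan["task"], "ran_on": ran_on, "lost": False}

plan = assign_pb("task-42")
print(execute(plan, primary_failed=True))            # the backup absorbs the failure
```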
The development of multi-core systems (MCS) has considerably improved existing technologies in the field of computer architecture. An MCS comprises several processors that are heterogeneous in resource capacity, working environment, topology, and so on. The existing multi-core technology unlocks additional research opportunities for energy minimization through effective task scheduling, yet the task scheduling process is still to be fully explored in multi-core systems. This paper presents a new hybrid genetic algorithm (GA) with krill herd (KH) based energy-efficient scheduling technique for multi-core systems (GAKH-SMCS). The goal of the GAKH-SMCS technique is to derive task schedules that achieve faster completion time and minimum energy dissipation. The GAKH-SMCS model involves a multi-objective fitness function using four parameters, namely makespan, processor utilization, speedup, and energy consumption, to schedule tasks proficiently. The performance of the GAKH-SMCS model has been validated against two datasets, namely a random dataset and a benchmark dataset. The experimental outcome confirmed the effectiveness of the GAKH-SMCS model in terms of makespan, processor utilization, speedup, and energy consumption. The overall simulation results show that the presented GAKH-SMCS model achieves energy efficiency through an optimal task scheduling process in MCS.
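A sketch of the kind of multi-objective fitness function the abstract describes, where makespan and energy are penalized while processor utilization and speedup are rewarded; the weights, normalization constants, and combination rule are illustrative assumptions, not values from the paper.

```python
# Sketch of a combined multi-objective fitness (weights and normalization are
# illustrative assumptions, not the paper's values). Lower fitness is better.

def fitness(makespan, utilization, speedup, energy,
            weights=(0.3, 0.2, 0.2, 0.3),
            ref=(100.0, 1.0, 8.0, 500.0)):
    w1, w2, w3, w4 = weights
    m_ref, u_ref, s_ref, e_ref = ref                 # reference scales for normalization
    # Penalize makespan and energy; reward utilization and speedup.
    return (w1 * makespan / m_ref
            - w2 * utilization / u_ref
            - w3 * speedup / s_ref
            + w4 * energy / e_ref)

schedule_a = fitness(makespan=80.0, utilization=0.75, speedup=5.2, energy=420.0)
schedule_b = fitness(makespan=95.0, utilization=0.60, speedup=4.1, energy=480.0)
print("A preferred" if schedule_a < schedule_b else "B preferred")
```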
Funding: This work was partly supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2021-0-02068, Artificial Intelligence Innovation Hub) (No. RS-2022-00155966, Artificial Intelligence Convergence Innovation Human Resources Development (Ewha University)).
Funding: Funded by the Science and Technology Foundation of State Grid Corporation of China (Grant No. 5108-202218280A-2-397-XG).
Funding: This work was supported in part by the National Science and Technology Council of Taiwan, under Contract NSTC 112-2410-H-324-001-MY2.
Funding: This work was supported by the National Key Research and Development Program of China (2021YFB2900603) and the National Natural Science Foundation of China (61831008).
Funding: Supported in part by the Hubei Natural Science and Research Project under Grant 2020418, in part by the 2021 Light of Taihu Science and Technology Project, and in part by the 2022 Wuxi Science and Technology Innovation and Entrepreneurship Program.
Funding: Supported by the "11th Five-Year Projects" pre-research projects fund of the National Arming Department.
Funding: Funded by the National Natural Science Foundation of China (Grant No. 62076106).
Funding: Supported by the Scientific and Technological Innovation Project of Chongqing (No. cstc2021jxjl20010) and the Graduate Student Innovation Program of Chongqing University of Technology (No. clgycx-20203166, No. gzlcx20222061, No. gzlcx20223229).
Funding: Funded in part by the Key Research and Promotion Projects of Henan Province under Grant Nos. 212102210079, 222102210052, 222102210007, and 222102210062.
Funding: Supported by the Communication Soft Science Program of the Ministry of Industry and Information Technology of China (No. 2022-R-43), the Natural Science Basic Research Program of Shaanxi (No. 2021JQ-719), and the Graduate Innovation Fund of Xi'an University of Posts and Telecommunications (No. CXJJZL2021014).
Abstract: Deploying task caching at edge servers has become an effective way to handle compute-intensive and latency-sensitive tasks on the industrial internet. However, selecting the task scheduling location so as to reduce task delay and cost while ensuring data security and reliable communication in edge computing remains a challenge. To solve this problem, this paper establishes a task scheduling model with joint blockchain and task caching in the industrial internet and designs a novel blockchain-assisted caching mechanism to enhance system security. The task scheduling problem, which couples the task scheduling decision, the task caching decision, and the blockchain reward, is formulated as a minimum weighted cost problem under delay constraints. This is a mixed-integer nonlinear problem, which is proved to be nonconvex and NP-hard. To obtain the optimal solution, this paper proposes a task scheduling strategy algorithm based on an improved genetic algorithm (IGA-TSPA), improving the genetic algorithm's initialization and mutation operations to reduce the size of the initial solution space and speed up convergence to the optimal solution. In addition, an Improved Least Frequently Used algorithm is proposed to improve the content hit rate. Simulation results show that IGA-TSPA finds the optimal solution faster and runs in a shorter time than existing edge computing scheduling algorithms. The established task scheduling model not only saves 62.19% of system overhead compared with local computing but is also of great significance for protecting data security, reducing task processing delay, and reducing system cost.
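The abstract mentions an Improved Least Frequently Used policy for raising the content hit rate of the task cache; the improvement itself is not detailed there, so the sketch below shows only a plain LFU replacement with a recency tiebreak as an assumed baseline, not the paper's refinement.

```python
class LFUCache:
    """Least Frequently Used cache with least-recently-used tiebreak (baseline sketch)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = {}   # key -> cached value
        self.freq = {}    # key -> access count
        self.last = {}    # key -> logical time of last access
        self.clock = 0

    def get(self, key):
        if key not in self.store:
            return None                      # cache miss
        self.clock += 1
        self.freq[key] += 1
        self.last[key] = self.clock
        return self.store[key]

    def put(self, key, value):
        self.clock += 1
        if key not in self.store and len(self.store) >= self.capacity:
            # Evict the least frequently used entry; break ties by recency.
            victim = min(self.store, key=lambda k: (self.freq[k], self.last[k]))
            for d in (self.store, self.freq, self.last):
                del d[victim]
        self.store[key] = value
        self.freq[key] = self.freq.get(key, 0) + 1
        self.last[key] = self.clock

cache = LFUCache(2)
cache.put("task-a", "result-a")
cache.put("task-b", "result-b")
cache.get("task-a")
cache.put("task-c", "result-c")   # evicts task-b, the least frequently used entry
print(list(cache.store))
```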
Funding: Supported by the Science and Technology Development Foundation of the Central Guiding Local under Grant No. YDZJSX2021A038, the National Natural Science Foundation of China under Grant No. 61806138, and the China University Industry-University-Research Collaborative Innovation Fund (Future Network Innovation Research and Application Project) under Grant No. 2021FNA04014.
Abstract: Cloud computing technology is favored by users because of its strong computing power and convenient services. At the same time, scheduling performance has a significant impact on promoting carbon neutrality. Current scheduling research in the multi-cloud environment aims to address the challenges that business demands bring to cloud data centres during peak hours, so the scheduling problem has promising application prospects in the multi-cloud environment. This paper points out that the scheduling problems currently studied in the multi-cloud environment mainly include independent task scheduling and workflow task scheduling based on the dependencies between tasks. It reviews the concepts, types, objectives, advantages, challenges, and research status of task scheduling in the multi-cloud environment. Task scheduling strategies proposed in the existing related references are analyzed, discussed, and summarized, covering research motivation, optimization algorithms, and related objectives. Finally, the research status of the two kinds of task scheduling is compared, and several important future research directions for multi-cloud task scheduling are proposed.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 61806138), the Central Government Guides Local Science and Technology Development Funds (Grant No. YDZJSX2021A038), the Key R&D Program of Shanxi Province (International Cooperation) under Grant No. 201903D421048, the Outstanding Innovation Project for Graduate Students of Taiyuan University of Science and Technology (Project No. XCX211004), and the China University Industry-University-Research Collaborative Innovation Fund (Future Network Innovation Research and Application Project) (Grant No. 2021FNA04014).
Abstract: Due to the security and scalability features of the hybrid cloud architecture, it can better meet the diverse requirements of users for cloud services, and a reasonable resource allocation solution is the key to utilizing the hybrid cloud adequately. However, most previous studies have not comprehensively optimized the performance of hybrid cloud task scheduling, even ignoring the conflicts between its security and privacy features and other requirements. To address these problems, a many-objective hybrid cloud task scheduling optimization model (HCTSO) is constructed that combines risk rate, resource utilization, total cost, and task completion time. Meanwhile, an opposition-based learning knee point-driven many-objective evolutionary algorithm (OBL-KnEA) is proposed to improve the model-solving performance. The algorithm uses opposition-based learning to generate initial populations for faster convergence. Furthermore, a perturbation-based multipoint crossover operator and a dynamic range mutation operator are designed to extend the search range. In experimental comparisons with other excellent algorithms on HCTSO, OBL-KnEA achieves excellent results in terms of evaluation metrics, initial populations, and model optimization effects.
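Opposition-based learning initialization, which the abstract credits for faster convergence, can be sketched as follows: for every random candidate x in [lb, ub], its opposite lb + ub - x is also generated, and the fitter half of the combined pool seeds the population. The bounds, population size, and toy single-objective fitness are assumptions; the knee-point selection, crossover, and mutation operators of OBL-KnEA are not shown.

```python
import random

def obl_initialize(pop_size, dim, lb, ub, fitness):
    """Opposition-based learning initialization (minimization assumed)."""
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    opposites = [[lb + ub - x for x in ind] for ind in pop]
    combined = pop + opposites
    combined.sort(key=fitness)          # keep the fitter half of the joint pool
    return combined[:pop_size]

# Toy single-objective stand-in for the many-objective HCTSO model.
def sphere(ind):
    return sum(x * x for x in ind)

population = obl_initialize(pop_size=20, dim=5, lb=-10.0, ub=10.0, fitness=sphere)
print("best initial fitness:", sphere(population[0]))
```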
Funding: Supported by the Deanship of Scientific Research at Prince Sattam Bin Abdulaziz University under Research Project No. 2018/01/9371.
Abstract: The reliability and availability of cloud systems have become major concerns of service providers, brokers, and end-users; therefore, studying fault-tolerance mechanisms in cloud computing attracts intense attention in industry and academia. Task-scheduling mechanisms, which distribute tasks to a group of instances for execution, can improve the fault-tolerance level of cloud systems. Much work has been undertaken in this direction to improve the overall outcome of cloud computing, such as improving service quality and reducing power consumption. However, little work on task scheduling has studied the problem of lost tasks from the broker's perspective. Task loss can happen due to virtual machine failures, server crashes, connection interruptions, and so on. The broker-based concept means that the backup task can be allocated by the broker on the same cloud service provider (CSP) or a different CSP, for example to reduce costs. This paper proposes a novel fault-tolerant mechanism that employs the primary backup (PB) model of task scheduling to address this issue. The proposed mechanism minimizes the impact of failure events by reducing the number of lost tasks, and it is further improved to shorten the makespan of submitted tasks in cloud systems. The experiments demonstrated that the proposed mechanism decreased the number of lost tasks by about 13%–15% compared with other mechanisms in the literature.
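A minimal sketch of the primary backup (PB) idea the abstract builds on: each task receives a primary placement and a backup placement on a different provider, and the backup is activated only if the primary fails, so the task is not lost. The provider names, failure simulation, and random selection rule are illustrative assumptions, not the paper's broker mechanism.

```python
import random

providers = ["csp-a", "csp-b", "csp-c"]   # assumed cloud service providers

def place_primary_backup():
    """Place the primary copy and a backup copy on two different providers."""
    primary = random.choice(providers)
    backup = random.choice([p for p in providers if p != primary])
    return primary, backup

def run_task(task_id, failure_prob=0.2):
    """Execute the primary; on failure, the backup copy keeps the task from being lost."""
    primary, backup = place_primary_backup()
    if random.random() < failure_prob:            # simulated primary failure
        return f"task {task_id}: primary on {primary} failed, recovered on {backup}"
    return f"task {task_id}: completed on {primary}"

random.seed(7)
for tid in range(3):
    print(run_task(tid))
```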
Funding: Supported by the Taif University Researchers Supporting Program (Project Number: TURSP-2020/195), Taif University, Saudi Arabia, and the Princess Nourah bint Abdulrahman University Researchers Supporting Project (Number: PNURSP2022R203), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The development of multi-core systems (MCS) has considerably improved existing technologies in the field of computer architecture. An MCS comprises several processors that are heterogeneous in resource capacity, working environment, topology, and so on. Multi-core technology unlocks additional research opportunities for energy minimization through effective task scheduling, yet the task scheduling process is still under-explored in multi-core systems. This paper presents a new hybrid genetic algorithm (GA) with krill herd (KH) based energy-efficient scheduling technique for multi-core systems (GAKH-SMCS). The goal of the GAKH-SMCS technique is to derive task schedules that achieve faster completion time and minimum energy dissipation. The GAKH-SMCS model uses a multi-objective fitness function with four parameters, namely makespan, processor utilization, speedup, and energy consumption, to schedule tasks proficiently. The performance of the GAKH-SMCS model has been validated on two datasets: a random dataset and a benchmark dataset. The experimental outcomes confirm the effectiveness of the GAKH-SMCS model in terms of makespan, processor utilization, speedup, and energy consumption. The overall simulation results show that the presented GAKH-SMCS model achieves energy efficiency through an optimal task scheduling process in MCS.
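The abstract states that GAKH-SMCS scores schedules with a four-part fitness function over makespan, processor utilization, speedup, and energy consumption. A weighted-sum sketch of such a fitness is given below; the weights, the normalization bounds, and the metric dictionary are assumptions, since the abstract does not specify how the objectives are combined.

```python
def fitness(metrics, weights=(0.3, 0.2, 0.2, 0.3)):
    """Assumed weighted-sum fitness for a candidate schedule (lower is better)."""
    w_mk, w_ut, w_sp, w_en = weights
    mk = metrics["makespan"] / metrics["makespan_bound"]   # normalize by an assumed upper bound
    ut = 1.0 - metrics["utilization"]                      # high utilization is good, so invert
    sp = 1.0 / metrics["speedup"]                          # high speedup is good, so invert
    en = metrics["energy"] / metrics["energy_bound"]       # normalize by an assumed upper bound
    return w_mk * mk + w_ut * ut + w_sp * sp + w_en * en

candidate = {"makespan": 120.0, "makespan_bound": 200.0,
             "utilization": 0.85, "speedup": 3.2,
             "energy": 450.0, "energy_bound": 1000.0}
print("fitness:", round(fitness(candidate), 4))
```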