AI (Artificial Intelligence) workloads are proliferating in modern real-time systems. As the tasks of AI workloads fluctuate over time, resource planning policies used for traditional fixed real-time tasks should be reexamined. In particular, it is difficult to handle changes in real-time tasks immediately without violating deadline constraints. To cope with this situation, this paper analyzes the task situations of AI workloads and makes the following two observations. First, resource planning for AI workloads is a complicated search problem that requires much time to optimize. Second, although the task set of an AI workload may change over time, the possible combinations of task sets are known in advance. Based on these observations, this paper proposes a new resource planning scheme for AI workloads that supports the re-planning of resources. Instead of generating resource plans on the fly, the proposed scheme pre-determines resource plans for the various combinations of tasks. Thus, in any case, the workload is immediately executed according to a maintained resource plan. Specifically, the proposed scheme maintains an optimized CPU (Central Processing Unit) and memory resource plan computed with genetic algorithms and applies it as soon as the workload changes. The proposed scheme is implemented in the open-source simulator SimRTS to validate its effectiveness. Simulation experiments show that the proposed scheme reduces the energy consumption of CPU and memory by 45.5% on average without deadline misses.
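The pre-planning idea can be sketched as follows: a genetic algorithm searches offline for a low-energy frequency assignment per task-set combination, and the running system only performs a table lookup when the workload changes. This is a minimal illustration; the task model, power model, and GA parameters are hypothetical, not taken from the paper, and the paper's memory-resource dimension is omitted.

```python
import itertools, random

# Hypothetical task model: name -> (period_ms, wcet_ms at maximum frequency).
TASKS = {"detect": (100, 30), "track": (50, 10), "plan": (200, 40)}
FREQ_LEVELS = [0.6, 0.8, 1.0]   # normalized CPU frequency levels

def schedulable(combo, freqs):
    # EDF utilization bound: WCET stretches as the frequency drops.
    return sum((TASKS[t][1] / f) / TASKS[t][0] for t, f in zip(combo, freqs)) <= 1.0

def energy(combo, freqs):
    # Toy model: dynamic power ~ f^3, so energy per job ~ f^2 * WCET.
    return sum((f ** 2) * TASKS[t][1] for t, f in zip(combo, freqs))

def ga_plan(combo, pop=30, gens=60, mut=0.1):
    """Evolve per-task frequency assignments minimizing energy, deadline-safe."""
    population = [[random.choice(FREQ_LEVELS) for _ in combo] for _ in range(pop)]
    for _ in range(gens):
        feasible = [p for p in population if schedulable(combo, p)] \
                   or [[1.0] * len(combo)]            # fall back to max speed
        elite = sorted(feasible, key=lambda p: energy(combo, p))[:max(2, pop // 5)]
        population = [[random.choice(FREQ_LEVELS) if random.random() < mut else g
                       for g in random.choice(elite)] for _ in range(pop)]
    return elite[0]

# Offline: pre-compute one plan per possible task-set combination.
PLAN_TABLE = {combo: ga_plan(combo)
              for r in range(1, len(TASKS) + 1)
              for combo in itertools.combinations(sorted(TASKS), r)}

# Online: when the workload changes, switching plans is a table lookup.
active = ("plan", "detect")
print(PLAN_TABLE[tuple(sorted(active))])
```

Keying the table by the sorted task names makes the online lookup independent of the order in which tasks arrive.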
Due to its security and scalability features, a hybrid cloud architecture can better meet the diverse requirements of users for cloud services, and a reasonable resource allocation solution is the key to utilizing the hybrid cloud adequately. However, most previous studies have not comprehensively optimized the performance of hybrid cloud task scheduling, some even ignoring the conflicts between its security and privacy features and other requirements. To address these problems, a many-objective hybrid cloud task scheduling optimization model (HCTSO) is constructed, combining risk rate, resource utilization, total cost, and task completion time. Meanwhile, an opposition-based learning knee point-driven many-objective evolutionary algorithm (OBL-KnEA) is proposed to improve the performance of model solving. The algorithm uses opposition-based learning to generate initial populations for faster convergence. Furthermore, a perturbation-based multipoint crossover operator and a dynamic range mutation operator are designed to extend the search range. In experimental comparisons with other excellent algorithms on HCTSO, OBL-KnEA achieves excellent results in terms of evaluation metrics, initial populations, and model optimization effects.
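The opposition-based initialization step named above is simple to state: for each random candidate x in [lo, hi], also evaluate its opposite lo + hi - x, and seed the population with the fitter half of the union. A minimal sketch, assuming a scalarized fitness as a stand-in for the many-objective model; the knee-point selection and the crossover/mutation operators are not shown.

```python
import numpy as np

def obl_init(pop_size, dim, lo, hi, fitness, seed=0):
    """Opposition-based initialization: draw a random population, form each
    candidate's opposite within [lo, hi], keep the fitter half of the union."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(pop_size, dim))
    x_opp = lo + hi - x                           # opposite point, coordinate-wise
    union = np.vstack([x, x_opp])
    scores = np.apply_along_axis(fitness, 1, union)
    return union[np.argsort(scores)[:pop_size]]   # keep best (minimization)

# Illustrative scalar stand-in for the many-objective HCTSO fitness.
cost = lambda v: np.sum(v ** 2)
pop = obl_init(pop_size=20, dim=5, lo=0.0, hi=1.0, fitness=cost)
print(pop.shape, cost(pop[0]))   # (20, 5) and the best seed candidate's cost
```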
By combining fault tolerance with power management, this paper developed a new method for aperiodic task sets to address the problem of task scheduling and voltage allocation in embedded real-time systems. The schedulability of the system was analyzed through checkpointing, and energy saving was achieved via dynamic voltage and frequency scaling. Simulation results showed that the proposed algorithm performed better than the existing voltage allocation techniques, saving 51.5% energy over FT-Only and 19.9% over FT + EC on average. Therefore, the proposed method is more appropriate for aperiodic tasks in embedded real-time systems.
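A toy sketch of the interaction the abstract describes: with a given number of tolerated faults and checkpoints splitting the task into segments, the lowest frequency that still meets the deadline can be picked directly. The single-task setting, parameters, and convex power model are illustrative assumptions, not the paper's analysis.

```python
def min_safe_frequency(wcet, deadline, n_ckpt, ckpt_cost, faults, levels):
    """Lowest normalized frequency (1.0 = max) at which the task still meets
    its deadline when each of `faults` faults forces a rollback to the last
    checkpoint, i.e. one inter-checkpoint segment is re-executed per fault."""
    seg = wcet / (n_ckpt + 1)            # execution between checkpoints, at f=1
    for f in sorted(levels):             # try the lowest frequency first
        worst = (wcet + faults * seg) / f + n_ckpt * ckpt_cost
        if worst <= deadline:
            return f
    return None                          # infeasible even at maximum frequency

f = min_safe_frequency(wcet=20, deadline=40, n_ckpt=3, ckpt_cost=0.5,
                       faults=1, levels=[0.5, 0.75, 1.0])
print(f, "energy ratio vs. full speed:", f ** 2 if f else "n/a")  # 0.75 0.5625
```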
Satellite observation scheduling plays a significant role in improving the efficiency of satellite observation systems. Although many scheduling algorithms have been proposed, emergency tasks, characterized by their importance and urgency (e.g., observation tasks targeting earthquake areas and military conflict areas), have not yet been taken into account. It is therefore crucial to investigate satellite integrated scheduling methods that focus on meeting the requirements of emergency tasks while maximizing the profit of common tasks. Firstly, a pretreatment approach is proposed, which eliminates conflicts among emergency tasks and allocates all tasks with a potential time window to the related orbits of satellites. Secondly, a mathematical model and an acyclic directed graph model are constructed. Thirdly, a hybrid ant colony optimization method mixed with iterated local search (ACO-ILS) is established to solve the problem. Moreover, to guarantee that all solutions satisfy the emergency-task requirement constraints, a constraint repair method is presented. Extensive simulation experiments show that the proposed integrated scheduling method is superior to two-phased scheduling methods, that iterated local search greatly improves ACO-ILS in both evolution speed and solution quality, and that ACO-ILS outperforms both a genetic algorithm and a simulated annealing algorithm.
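A compact skeleton of the ACO-plus-local-search pattern on a generic conflict-constrained task selection problem, a minimal sketch only: the pheromone rules, the acyclic directed graph model, and the constraint repair method from the paper are simplified away.

```python
import random

def local_search(sol, tasks, profit, conflict):
    """One ILS move: greedily insert any still-compatible task."""
    for t in sorted(set(tasks) - set(sol), key=lambda t: -profit[t]):
        if all(not conflict(t, s) for s in sol):
            sol = sol + [t]
    return sol

def aco_ils(tasks, profit, conflict, ants=10, iters=50, rho=0.1):
    """Build conflict-free task subsets ant by ant, improve each with local
    search, and reinforce pheromone on tasks in the best subset found."""
    tau = {t: 1.0 for t in tasks}                     # pheromone per task
    best, best_val = [], 0.0
    for _ in range(iters):
        for _ in range(ants):
            sol = []
            for t in sorted(tasks, key=lambda t: -tau[t] * profit[t]):
                if all(not conflict(t, s) for s in sol) \
                        and random.random() < tau[t] / (1.0 + tau[t]):
                    sol.append(t)
            sol = local_search(sol, tasks, profit, conflict)
            val = sum(profit[t] for t in sol)
            if val > best_val:
                best, best_val = sol, val
        for t in tau:                                 # evaporate + deposit
            tau[t] = (1 - rho) * tau[t] + (rho if t in best else 0.0)
    return best, best_val

profit = {"A": 5, "B": 3, "C": 4}
clash = lambda a, b: {a, b} == {"A", "B"}             # A and B conflict
print(aco_ils(list(profit), profit, clash))           # (['A', 'C'], 9) typically
```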
A reservation-based feedback scheduling scheme (FS-CBS) for a set of model predictive control (MPC) tasks is presented to optimize global control performance subject to limited computational resources. Implemented as an anytime algorithm, an MPC task allows computation time to be traded for control performance. Each MPC task is assigned a constant bandwidth server (CBS), whose reserved processor time is adjusted dynamically. The FS-CBS is shown to be robust against variations in the execution time of MPC tasks at runtime. Simulation results illustrate its effectiveness.
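The feedback loop implied above can be sketched as a periodic re-proportioning of CBS bandwidths from a measured benefit signal, keeping the total reserved utilization under a bound. The benefit signal and the proportional update rule are illustrative assumptions, not the paper's optimization.

```python
def rebalance_cbs(servers, u_max=0.9):
    """servers: task -> {'benefit': marginal control benefit per unit of CPU,
    'u': currently reserved utilization}. Redistribute the utilization budget
    u_max in proportion to each task's benefit signal."""
    total_benefit = sum(s["benefit"] for s in servers.values())
    for s in servers.values():
        s["u"] = u_max * s["benefit"] / total_benefit
    return servers

mpc = {
    "loop_fast": {"benefit": 3.0, "u": 0.3},
    "loop_slow": {"benefit": 1.0, "u": 0.3},
}
print(rebalance_cbs(mpc))   # fast loop gets u = 0.675, slow loop u = 0.225
```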
Research on task scheduling algorithms is one of the key techniques in grid computing. This paper first describes a DAG task scheduling model used in grid computing environments, then discusses the generational scheduling (GS) and communication inclusion generational scheduling (CIGS) algorithms. Finally, an improved CIGS algorithm is proposed for grid computing environments and is shown to be effective.
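The generational idea behind GS can be sketched in a few lines: tasks whose predecessors have all finished form the next generation and are dispatched in batches to free resources. The communication-cost accounting that CIGS adds is omitted here.

```python
def generational_schedule(deps, n_workers):
    """deps: task -> set of predecessors. Returns batches ('generations') of
    simultaneously ready tasks, each batch sized to the available resources."""
    remaining = {t: set(p) for t, p in deps.items()}
    done, generations = set(), []
    while remaining:
        ready = [t for t, p in remaining.items() if p <= done]
        if not ready:
            raise ValueError("cycle detected: not a DAG")
        for i in range(0, len(ready), n_workers):      # dispatch in batches
            generations.append(ready[i:i + n_workers])
        done.update(ready)
        for t in ready:
            del remaining[t]
    return generations

dag = {"a": set(), "b": set(), "c": {"a", "b"}, "d": {"c"}}
print(generational_schedule(dag, n_workers=2))   # [['a', 'b'], ['c'], ['d']]
```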
Model predictive control (MPC) cannot be reliably applied to real-time control systems because its computation time is not well defined. Implemented as an anytime algorithm, an MPC task allows computation time to be traded for control performance, thus obtaining predictability in time. An optimal feedback scheduling scheme (FS-CBS) for a set of MPC tasks is presented to maximize global control performance subject to limited processor time. Each MPC task is assigned a constant bandwidth server (CBS), whose reserved processor time is adjusted dynamically. The constraints in the FS-CBS guarantee the schedulability of the total task set and the stability of each component. The FS-CBS is shown to be robust against variations in the execution time of MPC tasks at runtime. Simulation results illustrate its effectiveness.
Harvesting energy from the environment (e.g., solar or wind energy) has recently emerged as a feasible solution for powering low-cost and low-power distributed systems. When the real-time responsiveness of a given application has to be guaranteed, the recharge rate at which energy is obtained inevitably affects task scheduling. This paper extends our previous works in [1] [2] to explore the real-time task assignment problem on an energy-harvesting distributed system. A solution using Ant Colony Optimization (ACO) and several significant improvements are presented. Simulations compare the performance of the approaches and demonstrate the solutions' effectiveness and efficiency.
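The recharge-rate coupling mentioned above can be made concrete with a simple feasibility check over one node's schedule; the linear recharge model, the endpoint-only battery check, and the parameters below are illustrative assumptions.

```python
def energy_feasible(jobs, recharge_rate, e0, e_min=0.0):
    """jobs: (start, duration, power_draw) tuples sorted by start time.
    The battery gains recharge_rate continuously and drains power_draw
    while a job runs; reject if the level drops below e_min at a job end."""
    level, now = e0, 0.0
    for start, dur, power in jobs:
        level += recharge_rate * (start - now)        # idle-time recharge
        level += (recharge_rate - power) * dur        # net change while busy
        now = start + dur
        if level < e_min:
            return False
    return True

# One node's candidate assignment: two jobs, feasible at this recharge rate.
print(energy_feasible([(0, 2, 3.0), (5, 2, 3.0)], recharge_rate=1.0, e0=5.0))
```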
In previous work on garbage collection (GC) models, scheduling analysis was given under the assumption that there were no aperiodic mutator tasks. However, this assumption does not hold in practical real-time systems. A GC algorithm that can schedule aperiodic tasks is proposed, and the variance of live memory is analyzed. In this algorithm, active tasks are deferred for processing by GC until their states become inactive, and the saved sporadic server time can be used to schedule aperiodic tasks. Scheduling sample task sets demonstrates that the proposed algorithm can schedule aperiodic tasks and decrease GC work. Thus, the proposed GC algorithm is more flexible and portable.
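A bare-bones sporadic server, the mechanism the abstract reuses for aperiodic requests. The replenishment bookkeeping is simplified to a single pending refill, which departs from the classical multi-chunk rules.

```python
class SporadicServer:
    """Serves aperiodic demand from a budget that is replenished one period
    after it starts being consumed (simplified single-chunk rule)."""
    def __init__(self, budget, period):
        self.capacity = budget
        self.budget = budget
        self.period = period
        self.pending = None           # (time, amount) of the scheduled refill

    def run(self, now, demand):
        if self.pending and now >= self.pending[0]:
            self.budget = min(self.capacity, self.budget + self.pending[1])
            self.pending = None
        served = min(demand, self.budget)
        if served > 0:
            self.budget -= served
            self.pending = (now + self.period, served)
        return served                 # aperiodic time granted at `now`

ss = SporadicServer(budget=2, period=10)
print(ss.run(0, 3), ss.run(5, 1), ss.run(12, 3))   # 2 0 2
```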
In imaging observation systems, imaging task scheduling is an important topic. Most scholars study imaging task scheduling from the perspective of static priority, and only a few from the perspective of dynamic priority; however, the priority of an imaging task is dynamic in actual engineering. To supplement research on imaging observation, this paper proposes a task priority model, a dynamic scheduling strategy, and a heuristic algorithm. First, this paper analyzes the relevant theoretical basis of imaging observation, decomposes task priority into four parts, namely target priority, imaging task priority, track, telemetry & control (TT&C) requirement priority, and data transmission requirement priority, summarizes in detail the attribute factors that affect these four types of priority, and designs the corresponding priority model. Then, taking the emergency task scheduling problem as the background, it proposes the dynamic scheduling strategy and heuristic algorithm. Finally, the task priority model, dynamic scheduling strategy, and heuristic algorithm are verified by experiments.
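The four-part decomposition above lends itself to a weighted aggregate scaled by a time-varying urgency factor; the weights and attribute scores below are illustrative placeholders, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class ImagingTask:
    target_prio: float     # importance of the observed target
    imaging_prio: float    # value of the imaging action itself
    ttc_prio: float        # track, telemetry & control requirement
    downlink_prio: float   # data transmission requirement

WEIGHTS = (0.4, 0.3, 0.15, 0.15)   # assumed weighting, sums to 1

def dynamic_priority(task, urgency):
    """Combine the four static components, then scale by a time-varying
    urgency factor (e.g., rising as an emergency task nears its window)."""
    parts = (task.target_prio, task.imaging_prio,
             task.ttc_prio, task.downlink_prio)
    return urgency * sum(w * p for w, p in zip(WEIGHTS, parts))

quake = ImagingTask(0.9, 0.8, 0.5, 0.7)
print(dynamic_priority(quake, urgency=1.5))   # emergency boost -> 1.17
```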
Recently, the number of Internet of Things (IoT) devices connected to the Internet has increased dramatically, as has the data produced by these devices. This requires offloading IoT tasks to release heavy computation and storage to resource-rich nodes such as Edge Computing and Cloud Computing. However, different service architectures and offloading strategies have different impacts on the service time performance of IoT applications. Therefore, this paper presents an Edge-Cloud system architecture that supports scheduling the offloading tasks of IoT applications in order to minimize the enormous amount of data transmitted in the network. It also introduces offloading latency models to investigate the delay of different offloading scenarios/schemes and explores the effect of computational and communication demand on each one. A series of experiments conducted on EdgeCloudSim shows that different offloading decisions within the Edge-Cloud system can lead to various service times due to the computational resources and communication types involved. Finally, this paper presents a comprehensive review of the current state-of-the-art research on task offloading issues in the Edge-Cloud environment.
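A small sketch of the kind of offloading latency model the abstract refers to: service time split into a transmission term and a processing term for local, edge, and cloud execution. The bandwidth and processing-capacity figures are illustrative assumptions.

```python
def service_time(data_mb, cycles_g, where):
    """Transmission delay plus processing delay for one offloaded task.
    data_mb: payload size in megabytes; cycles_g: compute demand in Gcycles."""
    uplink = {"edge": 50.0, "cloud": 10.0}               # Mb/s, assumed links
    speed = {"local": 1.0, "edge": 4.0, "cloud": 16.0}   # GHz, assumed nodes
    tx = 0.0 if where == "local" else (data_mb * 8) / uplink[where]
    return tx + cycles_g / speed[where]

for where in ("local", "edge", "cloud"):
    print(where, round(service_time(2.0, 8.0, where), 2))
# local 8.0, edge 2.32, cloud 2.1: the best target depends on both demands
```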
This study introduces an innovative approach to optimizing cloud computing job distribution using the Improved Dynamic Johnson Sequencing Algorithm (DJS). Emphasizing the on-demand resource sharing typical of Cloud Service Providers (CSPs), the research focuses on minimizing job completion delays through efficient task allocation. Utilizing Johnson's rule from operations research, the study addresses the challenge of resource availability after task completion. It advocates queuing models with multiple servers and finite capacity to improve job scheduling models, subsequently reducing wait times and queue lengths. The Dynamic Johnson Sequencing Algorithm and the M/M/c/K queuing model are applied to optimize task sequences, and their efficacy is shown through comparative analysis. The research evaluates the impact of makespan calculation on data file transfer times and assesses vital performance indicators, ultimately positioning the proposed technique as superior to existing approaches and offering a robust framework for enhanced task scheduling and resource allocation in cloud computing.
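For reference, classic Johnson's rule for sequencing jobs through two stages to minimize makespan, the building block the DJS approach extends; the dynamic and M/M/c/K queuing extensions from the study are not shown.

```python
def johnson_two_machine(jobs):
    """jobs: name -> (t1, t2), processing times on stage 1 and stage 2.
    Jobs with the smaller stage-1 time go to the front (ascending by t1);
    the rest go to the back (descending by t2). Minimizes 2-stage makespan."""
    front = sorted((n for n, (a, b) in jobs.items() if a <= b),
                   key=lambda n: jobs[n][0])
    back = sorted((n for n, (a, b) in jobs.items() if a > b),
                  key=lambda n: -jobs[n][1])
    return front + back

def makespan(order, jobs):
    m1 = m2 = 0
    for n in order:
        a, b = jobs[n]
        m1 += a                       # stage 1 is never idle
        m2 = max(m2, m1) + b          # stage 2 waits for stage 1 output
    return m2

jobs = {"J1": (3, 6), "J2": (5, 2), "J3": (1, 2)}
order = johnson_two_machine(jobs)
print(order, makespan(order, jobs))   # ['J3', 'J1', 'J2'] 12
```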
Multi-core processors are widely used as the running platform for safety-critical real-time systems such as spacecraft, and various types of real-time tasks are added dynamically at runtime. In order to improve the utilization of multi-core processors and ensure the real-time performance of the system, a reasonable real-time task allocation method must be adopted, but existing methods either target only single-core processors or perform too poorly to be applicable. Aiming at the task allocation problem when mixed real-time tasks are dynamically added, we propose VU-WF (Virtual Utilization Worst Fit), a heuristic mixed real-time task allocation algorithm based on virtual utilization for multi-core processors. First, a 4-tuple task model is established to describe fixed-point tasks and sporadic tasks in a unified manner. Then, a VDS (Virtual Deferral Server) for serving the execution requests of fixed-point tasks is constructed, and a schedulability test for the mixed task set is derived. Finally, combined with an analysis of the VDS's capacity, VU-WF is proposed; it selects cores in ascending order of virtual utilization for the schedulability test. Experiments show that the overall performance of VU-WF is better than that of available algorithms: it not only achieves a good schedulable ratio and load balancing but also has the lowest runtime overhead. On a 4-core processor, compared with available algorithms of the same schedulability ratio, load balancing is improved by 73.9% and runtime overhead is reduced by 38.3%. In addition, we develop RT-MCSS, an open-source visual multi-core mixed task scheduling simulator, to facilitate the design and verification of multi-core scheduling. Given its high performance, VU-WF can be widely used in resource-constrained, safety-critical real-time systems such as spacecraft, self-driving cars, and industrial robots.
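A minimal sketch of the worst-fit selection policy described above: cores are tried in ascending order of accumulated (virtual) utilization, and the task lands on the first core that passes a schedulability test. A plain utilization bound stands in here for the paper's VDS-based test.

```python
def vu_worst_fit(cores, task_u, bound=1.0):
    """cores: per-core accumulated virtual utilization. Try the least-loaded
    core first (worst fit); admit the task on the first core where the
    utilization test still passes."""
    for i in sorted(range(len(cores)), key=lambda i: cores[i]):
        if cores[i] + task_u <= bound:   # stand-in schedulability test
            cores[i] += task_u
            return i
    return None                          # rejected: no core can take the task

cores = [0.6, 0.2, 0.5, 0.9]
print(vu_worst_fit(cores, 0.3), cores)   # -> 1 [0.6, 0.5, 0.5, 0.9]
```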