Marine container terminals (MCTs) play a key role in the marine intelligent transportation system and the international logistics system, and the efficiency of resource scheduling significantly influences the operational performance of an MCT. To solve the practical resource scheduling problem (RSP) in MCT efficiently, this paper contributes to both the problem model and the algorithm design. First, in the problem model, unlike most existing studies that schedule only part of the resources in MCT, we propose a unified mathematical model that formulates an integrated RSP. The new integrated RSP model allocates and schedules multiple MCT resources simultaneously, taking total cost minimization as the objective. Second, in the algorithm design, a pre-selection-based ant colony system (PACS) approach is proposed based on a graphic-structure solution representation and a pre-selection strategy. On the one hand, as the RSP can be formulated as a shortest-path problem on a directed complete graph, the graphic structure is used to encode solutions under the multiple constraints and factors of the RSP, which effectively avoids generating infeasible solutions. On the other hand, the pre-selection strategy aims to reduce the computational burden of PACS and to obtain a higher-quality solution faster. To evaluate the performance of the proposed PACS in solving the new integrated RSP model, a set of test cases of different sizes is used. Experimental results and comparisons show the effectiveness and efficiency of the PACS algorithm, which significantly outperforms other state-of-the-art algorithms.
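Because the integrated RSP is cast as a shortest-path search on a directed complete graph solved by an ant colony system, a minimal sketch of the ACS transition and local pheromone-update rules is given below; the parameters (alpha, beta, q0, rho, tau0), the cost function, and the toy graph are assumptions for illustration, not the PACS settings.

```python
import random

# Minimal sketch of an ant colony system (ACS) step on a directed complete
# graph, where each node is a candidate assignment and a complete path is one
# RSP solution. Parameter names (alpha, beta, q0, rho, tau0) and the cost
# function are illustrative assumptions, not the exact PACS settings.

def choose_next(current, unvisited, pheromone, cost, alpha=1.0, beta=2.0, q0=0.9):
    """ACS pseudo-random proportional rule: exploit the best edge with
    probability q0, otherwise explore with roulette-wheel selection."""
    scores = {j: (pheromone[current][j] ** alpha) * ((1.0 / cost(current, j)) ** beta)
              for j in unvisited}
    if random.random() < q0:                       # exploitation
        return max(scores, key=scores.get)
    total = sum(scores.values())                   # biased exploration
    r, acc = random.random() * total, 0.0
    for j, s in scores.items():
        acc += s
        if acc >= r:
            return j
    return j                                       # numerical fallback

def local_update(pheromone, i, j, rho=0.1, tau0=0.01):
    """ACS local pheromone update applied after an ant crosses edge (i, j)."""
    pheromone[i][j] = (1 - rho) * pheromone[i][j] + rho * tau0

# toy usage: 3 candidate nodes, uniform pheromone, distance-like cost
pher = {a: {b: 1.0 for b in range(3)} for a in range(3)}
nxt = choose_next(0, [1, 2], pher, cost=lambda i, j: abs(i - j) + 1)
local_update(pher, 0, nxt)
print(nxt, pher[0][nxt])
```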
In current research on task offloading and resource scheduling in vehicular networks, vehicles are commonly assumed to maintain constant speed or relatively stationary states, and the impact of speed variations on task offloading is often overlooked. It is frequently assumed that vehicles can be accurately modeled during actual motion. However, in dynamic vehicular environments, both the tasks generated by vehicles and the vehicles' surroundings are constantly changing, making real-time modeling of actual dynamic vehicular network scenarios difficult. Taking actual dynamic vehicular scenarios into account, this paper considers the real-time non-uniform movement of vehicles and proposes a vehicular task dynamic offloading and scheduling algorithm for single-task multi-vehicle scenarios, aiming to solve the dynamic decision-making problem in the task offloading process. The optimization objective is to minimize the average task completion time, formulated as a multi-constrained non-linear programming problem. Due to the mobility of vehicles, a constraint model is applied in the decision-making process to dynamically determine whether the communication range is sufficient for task offloading and transmission. Finally, the proposed vehicular task dynamic offloading and scheduling algorithm based on multi-agent deep deterministic policy gradient (MADDPG) is applied to solve the optimization problem. Simulation results show that the proposed algorithm achieves lower-latency task computation offloading. Meanwhile, the average task completion time of the proposed algorithm improves by 7.6% over the MADDPG scheme and by 51.1% over deep deterministic policy gradient (DDPG).
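A sketch of the range-based feasibility check applied before offloading is given below: the vehicle must remain inside the server's coverage at least as long as the estimated upload, compute, and download time, with the current speed re-read each slot to reflect non-uniform motion. The straight-line geometry, data sizes, rates and coverage radius are assumptions for illustration.

```python
import math
from dataclasses import dataclass

# Sketch of the range constraint checked before offloading: the vehicle must
# stay inside the edge server's coverage at least as long as the estimated
# upload + compute + download time. The geometry (straight-line travel along
# the x-axis), data sizes and rates are illustrative assumptions.

@dataclass
class Task:
    upload_bits: float
    result_bits: float
    cycles: float

@dataclass
class Server:
    x: float
    y: float           # lateral offset from the road
    radius: float
    uplink: float      # bits per second
    downlink: float    # bits per second
    freq: float        # CPU cycles per second

def dwell_time(vehicle_x, speed, srv):
    """Remaining time inside coverage for a vehicle driving along the x-axis
    at its current speed (re-evaluated every slot to capture non-uniform motion)."""
    half_chord_sq = srv.radius ** 2 - srv.y ** 2
    if half_chord_sq <= 0 or speed <= 0:
        return 0.0
    exit_x = srv.x + math.sqrt(half_chord_sq)      # point where coverage ends
    return max(0.0, (exit_x - vehicle_x) / speed)

def offload_feasible(task, vehicle_x, speed, srv):
    t_needed = (task.upload_bits / srv.uplink
                + task.cycles / srv.freq
                + task.result_bits / srv.downlink)
    return t_needed <= dwell_time(vehicle_x, speed, srv)

srv = Server(x=0.0, y=20.0, radius=200.0, uplink=20e6, downlink=40e6, freq=5e9)
task = Task(upload_bits=8e6, result_bits=1e6, cycles=2e9)
print(offload_feasible(task, vehicle_x=-150.0, speed=25.0, srv=srv))
```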
To improve productivity and resource utilization and to reduce the production cost of flexible job shops, this paper designs an improved two-layer optimization algorithm for the dual-resource scheduling optimization problem of the flexible job shop considering workpiece batching. Firstly, a mathematical model is established to minimize the maximum completion time. Secondly, an improved two-layer optimization algorithm is designed: the outer layer uses an improved PSO (Particle Swarm Optimization) to solve the workpiece batching problem, and the inner layer uses an improved GA (Genetic Algorithm) to solve the dual-resource scheduling problem. Then, a rescheduling method is designed to handle task disturbances, represented by machine failures, that occur during workshop production. Finally, the superiority and effectiveness of the improved two-layer optimization algorithm are verified on two typical cases. The results show that the improved two-layer optimization algorithm increases average productivity by 7.44% compared to the ordinary two-layer optimization algorithm. By setting different numbers of AGVs (Automated Guided Vehicles) and analyzing the impact on the production cycle of the whole order, this paper uses two indicators, the maximum-completion-time decreasing rate and the average AGV load time, to obtain the optimal number of AGVs, which saves production cost while ensuring production efficiency. This research links the solved problem to the real production process, improving productivity and reducing the production cost of the flexible job shop, and provides new ideas for subsequent research.
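The two-layer structure can be sketched as a nested loop: the outer layer searches over how each job is split into batches, and the inner layer evaluates a batching by scheduling the resulting operations and returning the makespan. In the sketch below the outer PSO and inner GA are replaced by simple stand-ins (random search and a greedy list scheduler) purely to show the nesting; the job data and split choices are invented.

```python
import random

# Skeleton of the two-layer structure: outer layer chooses how each job's lot
# is split into batches, inner layer evaluates a batching by scheduling the
# resulting operations and returning the makespan. The stand-ins (random
# search, greedy list scheduler) and the data are illustrative only.

def inner_schedule(batches, machines):
    """Greedy stand-in for the inner GA: assign each batch to the machine
    that currently finishes earliest; returns the makespan."""
    finish = [0.0] * machines
    for proc_time in batches:
        m = min(range(machines), key=lambda i: finish[i])
        finish[m] += proc_time
    return max(finish)

def outer_search(job_times, machines, splits=(1, 2, 4), iters=200, seed=0):
    """Random-search stand-in for the outer PSO: try different batch counts
    per job and keep the batching with the smallest inner makespan."""
    rng = random.Random(seed)
    best = (float("inf"), None)
    for _ in range(iters):
        plan = [rng.choice(splits) for _ in job_times]
        batches = [t / k for t, k in zip(job_times, plan) for _ in range(k)]
        best = min(best, (inner_schedule(batches, machines), plan))
    return best

print(outer_search(job_times=[12.0, 8.0, 20.0, 6.0], machines=3))
```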
Currently, applications access remote computing resources mainly through cloud data centers, but this mode of operation greatly increases communication latency and reduces overall quality of service (QoS) and quality of experience (QoE). Edge computing technology extends cloud service functionality to the edge of the mobile network, closer to the task execution end, and can effectively mitigate the communication latency problem. However, the massive and heterogeneous nature of servers in edge computing systems brings new challenges to task scheduling and resource management, and the booming development of artificial neural networks provides more powerful methods to alleviate this limitation. Therefore, this paper proposes a time series forecasting model incorporating Conv1D, LSTM and GRU for edge computing device resource scheduling, trains and tests the forecasting model on a small self-built dataset, and achieves competitive experimental results.
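A minimal sketch of a Conv1D + LSTM + GRU forecasting network of the kind described is shown below, using the TensorFlow Keras API; the layer sizes, window length, and single-step regression head are assumptions for illustration rather than the authors' exact architecture.

```python
import tensorflow as tf

# Minimal sketch of a Conv1D + LSTM + GRU forecaster for resource usage time
# series. Window length, layer widths and the single-step regression head are
# illustrative assumptions, not the paper's exact architecture.

def build_model(window=32, features=4):
    return tf.keras.Sequential([
        tf.keras.Input(shape=(window, features)),            # past resource usage
        tf.keras.layers.Conv1D(32, kernel_size=3, padding="causal", activation="relu"),
        tf.keras.layers.LSTM(64, return_sequences=True),
        tf.keras.layers.GRU(32),
        tf.keras.layers.Dense(features),                     # next-step forecast
    ])

model = build_model()
model.compile(optimizer="adam", loss="mse")
model.summary()
```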
With increased dependence on space assets, scheduling and tasking of the space surveillance network (SSN) are vitally important. The multi-sensor collaborative observation scheduling (MCOS) problem is a multi-constraint, high-conflict combinatorial optimization problem that is nondeterministic polynomial (NP)-hard. This research establishes a sub-time-window constraint satisfaction problem (STWCSP) model with the objective of maximizing observation profit. Considering the significant effect of genetic algorithms (GA) on resource allocation problems, an evolution heuristic (EH) algorithm containing three strategies focused on the MCOS problem is proposed. For each case, a task scheduling sequence is first obtained via an improved GA with penalty (GAPE) algorithm, and then a mission planning algorithm (heuristic rule) determines the specific observation time. Compared to the model without sub-time windows and several other algorithms, a series of experiments illustrates that the STWCSP model achieves better total profit. Experiments on strategy and parameter sensitivity further validate the performance of the EH algorithm.
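Since the GAPE layer scores each candidate task scheduling sequence by profit minus a constraint penalty, a small sketch of such a penalty-based fitness is given below; the conflict definition (overlapping observations on the same sensor), the profit values and the penalty coefficient are assumptions for illustration.

```python
# Sketch of a penalty-based fitness of the kind a GA-with-penalty uses: the raw
# observation profit of a schedule is reduced in proportion to how strongly it
# violates the constraints. The conflict rule, profits and penalty coefficient
# are illustrative assumptions.

def count_conflicts(schedule):
    """Number of task pairs that overlap in time on the same sensor."""
    conflicts = 0
    for i in range(len(schedule)):
        for j in range(i + 1, len(schedule)):
            a, b = schedule[i], schedule[j]
            if a["sensor"] == b["sensor"] and a["start"] < b["end"] and b["start"] < a["end"]:
                conflicts += 1
    return conflicts

def fitness(schedule, penalty=50.0):
    profit = sum(t["profit"] for t in schedule)
    return profit - penalty * count_conflicts(schedule)

plan = [
    {"sensor": "s1", "start": 0, "end": 10, "profit": 30},
    {"sensor": "s1", "start": 8, "end": 15, "profit": 40},   # overlaps the first
    {"sensor": "s2", "start": 0, "end": 12, "profit": 25},
]
print(fitness(plan))   # 95 - 50 * 1 = 45
```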
With the rapid development of intelligent manufacturing and changes in market demand, the current manufacturing industry is characterized by multiple varieties, small batches, customization, and short production cycles, with a certain flexibility throughout the production process. In this paper, a mathematical model is established with the minimum production cycle as the optimization objective for the dual-resource batch scheduling of the flexible job shop, and an improved nested optimization algorithm is designed to solve the problem. The outer-layer batch optimization problem is solved by an improved simulated annealing algorithm. The inner dual-resource scheduling problem is solved by an improved adaptive genetic algorithm, a double coding scheme, and a decoding scheme for Automated Guided Vehicle (AGV) scheduling based on scheduling rules. The time consumption of collision-free paths is computed with a path planning algorithm that uses the Dijkstra algorithm based on time windows. Finally, the effectiveness of the algorithm is verified on actual cases, and the influence of AGVs with different configurations on workshop production efficiency is analyzed.
Cloud computing (CC) is developing as a powerful and flexible computational structure for providing ubiquitous services to users. It integrates interrelated software and hardware resources in a manner distinct from the classical computational environment. The various software and hardware resources are combined into a resource pool. Software no longer resides in a single hardware environment; it can be executed on the scheduled resource pools to optimize resource consumption. Optimizing energy consumption in CC environments raises the question of how to apply energy conservation approaches for effective resource allocation. This study introduces a Battle Royale Optimization-based Resource Scheduling Scheme for the Cloud Computing Environment (BRORSS-CCE). The presented BRORSS-CCE technique schedules the available resources for maximum utilization and an effective makespan. In the BRORSS-CCE technique, the BRO is a population-based algorithm in which every individual is represented by a soldier/player who tries to move toward the optimal place and to survive. The BRORSS-CCE technique can be employed to balance the load, distribute resources based on demand, and guarantee service to all requests. The experimental validation of the BRORSS-CCE technique is tested under distinct aspects. The experimental outcomes indicate the improvements of the BRORSS-CCE technique over other models.
Recently, with the growth of cyber-physical systems (CPS), several applications have begun to be deployed in CPS for connecting cyberspace with the physical scale effectively. Besides, cloud computing (CC)-enabled CPS offers huge processing and storage resources for CPS, which is helpful for a range of application areas. At the same time, with the massive development of applications in the CPS environment, the energy utilization of cloud-enabled CPS has gained significant interest. For improving the energy effectiveness of the CC platform, virtualization technologies have been employed for resource management, and applications are executed via virtual machines (VMs). Since effective scheduling of resources plays an important role in the design of cloud-enabled CPS, this paper focuses on the design of a chaotic sandpiper optimization based VM scheduling (CSPO-VMS) technique for energy-efficient CPS. The CSPO-VMS technique is utilized to search for the optimum VM migration solution and helps to choose an effective scheduling strategy. The CSPO algorithm integrates the traditional SPO algorithm with chaos theory, which substitutes the main parameter and combines it with chaos. In order to improve the process of determining globally optimal solutions and the convergence rate of the SPO algorithm, the chaotic concept is included in the SPO algorithm. The CSPO-VMS technique also derives a fitness function to choose the optimal scheduling strategy in the CPS environment. In order to demonstrate the enhanced performance of the CSPO-VMS technique, a wide range of simulations was carried out and the results are examined under varying aspects. The simulation results confirm the improved performance of the CSPO-VMS technique over recent methods in terms of different measures.
One of the challenging scheduling problems in Cloud data centers is to take into consideration the allocation and migration of reconfigurable virtual machines as well as the integrated features of the hosting physical machines. We introduce a Dynamic and Integrated Resource Scheduling algorithm (DAIRS) for Cloud data centers. Unlike traditional load-balance scheduling algorithms, which often consider only one factor such as the CPU load of physical servers, DAIRS treats CPU, memory and network bandwidth in an integrated way for both physical and virtual machines. We develop integrated measurements for the total imbalance level of a Cloud data center as well as the average imbalance level of each server. Simulation results show that DAIRS performs well with regard to the total imbalance level, the average imbalance level of each server, and the overall running time.
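A sketch of an integrated imbalance measure in the spirit of DAIRS is shown below: each server's imbalance is the spread among its own CPU, memory and bandwidth utilization, and the data-center imbalance is the spread of each dimension across servers. The exact normalization and weighting in the paper may differ; the formulas and data are illustrative.

```python
from statistics import pstdev, mean

# Sketch of an integrated imbalance measure over CPU, memory and bandwidth
# utilization. The exact weighting and normalization in DAIRS may differ;
# these formulas and the sample utilizations are illustrative assumptions.

def server_imbalance(cpu, mem, net):
    """Imbalance of one server: spread among its own three utilizations."""
    return pstdev([cpu, mem, net])

def datacenter_imbalance(servers):
    """Total imbalance: for each dimension, spread of that utilization across
    all servers, summed over CPU, memory and bandwidth."""
    return sum(pstdev([s[d] for s in servers]) for d in range(3))

servers = [(0.9, 0.2, 0.4), (0.3, 0.8, 0.5), (0.5, 0.5, 0.5)]
print(datacenter_imbalance(servers),
      mean(server_imbalance(*s) for s in servers))
```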
Cloud computing is a new paradigm in which dynamic and virtualized computing resources are provided as services over the Internet. However, because cloud resources are open and dynamically configured, resource allocation and scheduling are extremely important challenges in cloud infrastructure. Based on distributed agents, this paper presents a trusted data acquisition mechanism for efficiently scheduling cloud resources to satisfy various user requests. Our mechanism defines, collects and analyzes multiple key trust targets of cloud service resources based on historical information of servers in a cloud data center. As a result, using our trust computing mechanism, cloud providers can utilize their resources efficiently and also provide highly trusted resources and services to many users.
For a mobile edge computing network consisting of multiple base stations and resource-constrained user devices, network costs in terms of energy and delay are incurred during task offloading from the user to the edge server. With the limitations imposed on transmission capacity, computing resources, and connection capacity, a per-slot online learning algorithm is first proposed to minimize the time-averaged network cost. In particular, by leveraging the theories of stochastic gradient descent and minimum-cost maximum-flow, user association is jointly optimized with resource scheduling in each time slot. Theoretical analysis proves that the proposed approach can achieve asymptotic optimality without any prior knowledge of the network environment. Moreover, to alleviate the high network overhead incurred during user handover and task migration, a two-timescale optimization approach is proposed to avoid frequent changes in user association. With user association executed on a large timescale and resource scheduling decided in each time slot, asymptotic optimality is preserved. Simulation results verify the effectiveness of the proposed online learning algorithms.
Real-time resource allocation is crucial for a phased array radar to undertake multiple tasks with limited resources, for example in multi-target tracking, where targets need to be prioritized so that resources can be allocated accordingly and effectively. A three-way decision-based model is proposed for adaptive scheduling of phased-array radar dwell time. In the model, the threat posed by a target is measured by an evaluation function, and the target is accordingly assigned to one of three possible decision regions, i.e., the positive region, the negative region, and the boundary region. Different regions have different priorities in terms of resource demand, and a different radar resource allocation decision is applied to each region to satisfy the different tracking accuracies of multiple targets. In addition, the dwell-time scheduling model can be further optimized by a strategy that determines proper thresholds for the three-way decision, adapting the thresholds in real time. The advantages and performance of the proposed model have been verified by simulations comparing it with the traditional two-way decision model and with the three-way decision model without threshold optimization. The experimental results demonstrate that the proposed model has a clear advantage in detecting high-threat targets.
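A minimal sketch of the three-way assignment is shown below: a threat value from an evaluation function is compared against two thresholds to place a target in the positive, boundary, or negative region. The toy evaluation function, its weights, and the threshold values are assumptions for illustration, not the paper's optimized parameters.

```python
# Sketch of the three-way decision step: a target's threat value is compared
# against two thresholds (alpha > beta) to land in the positive, boundary, or
# negative region, each mapped to a different dwell-time priority. The toy
# evaluation function, weights and thresholds are illustrative assumptions.

def threat(distance_km, speed_ms, rcs):
    # toy evaluation function: closer, faster, larger targets score higher
    return (0.5 * (1 - min(distance_km, 400) / 400)
            + 0.3 * min(speed_ms, 1000) / 1000
            + 0.2 * min(rcs, 10) / 10)

def decide(value, alpha=0.7, beta=0.4):
    if value >= alpha:
        return "positive"    # high threat: schedule dwell first, tight revisit
    if value <= beta:
        return "negative"    # low threat: coarse tracking, spare resources
    return "boundary"        # defer: revisit when resources allow

for tgt in [(50, 900, 5), (300, 200, 1), (150, 600, 2)]:
    print(tgt, decide(threat(*tgt)))
```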
Unmanned aerial vehicle (UAV) resource scheduling means allocating and aggregating the available UAV resources according to the mission requirements and the battlefield situation assessment. In previous studies, the models cannot reflect mission synchronization, and the targets are treated separately, which results in a large problem scale and high computational complexity. To overcome these disadvantages, a model for UAV resource scheduling under mission synchronization is proposed, based on single-objective non-linear integer programming, and several cooperative teams are aggregated for the target clusters from the available resources. The evaluation indices of weapon allocation are referenced in establishing the objective function and the constraints. The scales of the target clusters are used as constraints on the scales of the cooperative teams so that they match in scale. Functions of the intersection between the mission time window and the UAV arrival time window are introduced into the objective function and the constraints in order to describe mission synchronization effectively. The results demonstrate that the proposed expanded model can meet the requirement of mission synchronization, guide the aggregation of cooperative teams for the target clusters, and control the scale of the problem effectively.
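The mission synchronization constraint can be illustrated with the time-window intersection below: the overlap between the mission time window and each UAV's arrival time window must be long enough. The interval endpoints and the minimum-overlap requirement are assumptions for illustration.

```python
# Sketch of the time-window intersection used to express mission
# synchronization: the overlap between a target's mission window and each
# team member's arrival window must be long enough for the task. The interval
# endpoints and the minimum-overlap requirement are illustrative assumptions.

def overlap(window_a, window_b):
    start = max(window_a[0], window_b[0])
    end = min(window_a[1], window_b[1])
    return max(0.0, end - start)

def synchronized(mission_window, arrival_windows, min_overlap):
    """A cooperative team is synchronized if every member's arrival window
    overlaps the mission window by at least min_overlap."""
    return all(overlap(mission_window, w) >= min_overlap for w in arrival_windows)

print(synchronized((10, 30), [(5, 25), (12, 40)], min_overlap=8))   # True
print(synchronized((10, 30), [(5, 25), (28, 40)], min_overlap=8))   # False
```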
In view of the fact that traditional job shop scheduling considers only a single factor, which limits the effect of resource allocation, the dual-resource integrated scheduling problem between AGVs and machines in an intelligent manufacturing job shop environment was studied. The dual-resource integrated scheduling model of AGVs and machines was established by comprehensively considering the constraints of machines, workpieces and AGVs. A bidirectional single-path fixed guidance system based on a topological map was determined, and the AGV transportation task model was defined. An improved A* path optimization algorithm was used to determine the optimal path, and the path conflict elimination mechanism was described. An improved NSGA-II algorithm was used to determine the machining workpiece sequence, and a competition mechanism was introduced to allocate AGV transportation tasks. The proposed model and method were verified by a workshop production example, and the results showed that the dual-resource integrated scheduling strategy of AGVs and machines is effective.
With the rapid development and popularization of 5G and the Internet of Things, a number of new applications have emerged, such as driverless cars. Most of these applications are time-delay sensitive, and some deficiencies were found during data processing through the cloud-centric architecture. Handling the data generated by terminals at the edge of the network is an urgent problem to be solved at present. In 5G environments, edge computing can better meet the needs of low-delay and wide-connection applications and support fast requests from terminal users. However, edge computing only has the computing advantage of the edge layer, and it is difficult to achieve global resource scheduling and configuration, which may lead to low resource utilization, long task processing delay, and unbalanced system load, thereby affecting users' quality of service. To solve this problem, this paper studies task scheduling and resource collaboration based on a Cloud-Edge-Terminal collaborative architecture, proposes a genetic simulated annealing fusion algorithm, called GSA-EDGE, to achieve task scheduling and resource allocation, and designs a series of experiments to verify the effectiveness of the GSA-EDGE algorithm. The experimental results show that the proposed method can reduce the task processing delay compared with local task processing and with the average task allocation method.
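The fusion idea in a genetic simulated annealing hybrid can be illustrated by the acceptance step: a worse offspring is still accepted with the Metropolis probability, and the temperature is cooled each generation. The sketch below is illustrative only; the cost values, cooling rate, and the stand-in for crossover/mutation are assumptions, not the GSA-EDGE implementation.

```python
import math, random

# Sketch of the acceptance step in a genetic simulated annealing hybrid: a
# worse child schedule is still accepted with probability exp(-delta / T), and
# T is cooled each generation. The cost values, cooling rate and the stand-in
# for crossover/mutation are illustrative assumptions.

def accept(parent_cost, child_cost, temperature, rng=random):
    delta = child_cost - parent_cost
    return delta <= 0 or rng.random() < math.exp(-delta / temperature)

T, cooling = 100.0, 0.95
parent_cost = 42.0
for generation in range(5):
    child_cost = parent_cost + random.uniform(-5, 5)   # stand-in for crossover/mutation
    if accept(parent_cost, child_cost, T):
        parent_cost = child_cost
    T *= cooling
    print(generation, round(parent_cost, 2), round(T, 1))
```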
Selecting appropriate resources for running a job efficiently is one of the common objectives in a computational grid. Resource scheduling should consider the specific characteristics of the application and decide accordingly which metrics to use. This paper presents a distributed resource scheduling framework consisting mainly of a job scheduler and a local scheduler. In order to meet the requirements of different applications, we adopt HGSA, a Heuristic-based Greedy Scheduling Algorithm, to schedule jobs in the grid, where the heuristic knowledge consists of the metric weights of the computing resources and the metric workload impact factors. The metric weight is used to control the effect of the metric on the application. For different applications, only the metric weights and the metric workload impact factors need to be changed, while the scheduling algorithm remains the same. Experimental results are presented to demonstrate the adaptability of the HGSA.
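As an illustration of how metric weights and workload impact factors could combine into a greedy node choice, the sketch below scores each resource by a weighted sum of its metrics discounted by load; the metric names, weights and impact factors are invented for the example and are not HGSA's actual values.

```python
# Sketch of a heuristic greedy selection in the spirit of HGSA: each candidate
# resource is scored by a weighted sum of its metrics, with each metric
# discounted by a workload impact factor applied to the node's current load.
# Metric names, weights and impact factors are illustrative assumptions.

def node_score(metrics, weights, impact):
    """metrics, weights, impact: dicts keyed by metric name (e.g. 'cpu', 'mem')."""
    return sum(weights[k] * metrics[k] * (1.0 - impact[k] * metrics.get("load", 0.0))
               for k in weights)

def pick_node(nodes, weights, impact):
    return max(nodes, key=lambda name: node_score(nodes[name], weights, impact))

nodes = {
    "n1": {"cpu": 0.8, "mem": 0.6, "load": 0.3},
    "n2": {"cpu": 0.5, "mem": 0.9, "load": 0.1},
}
weights = {"cpu": 0.7, "mem": 0.3}   # per-application metric weights
impact = {"cpu": 0.5, "mem": 0.2}    # per-metric workload impact factors
print(pick_node(nodes, weights, impact))
```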
Many Task Computing (MTC) is a new class of computing paradigm in which the aggregate number of tasks, the quantity of computing, and the volumes of data may be extremely large. With the advent of Cloud computing and the big data era, scheduling and executing large-scale computing tasks efficiently and allocating resources to tasks reasonably are becoming quite challenging problems. To improve both task execution and resource utilization efficiency, we present a task scheduling algorithm with resource attribute selection, which can select the optimal node to execute a task according to its resource requirements and the fitness between the resource node and the task. Experimental results show a significant improvement in execution throughput and resource utilization compared with three other algorithms and four scheduling frameworks. In the scheduling algorithm comparison, the throughput is 77% higher than the Min-Min algorithm and the resource utilization reaches 91%. In the scheduling framework comparison, the throughput (with work-stealing) is at least 30% higher than the other frameworks and the resource utilization reaches 94%. The scheduling algorithm can thus serve as a good model for practical MTC applications.
Resource scheduling is crucial to data centers. However, most previous work focuses only on one-dimensional resource models, ignoring the fact that multiple resources are utilized simultaneously, including CPU, memory and network bandwidth. As cloud computing allows uncoordinated and heterogeneous users to share a data center, competition for multiple resources has become increasingly severe. Motivated by the differences in integrated utilization obtained from different packing schemes, in this paper we treat the scheduling problem as a multi-dimensional combinatorial optimization problem with constraint satisfaction. Given its NP-hardness, we present Multiple attribute decision based Integrated Resource Scheduling (MIRS), and a novel heuristic algorithm to obtain an approximately optimal solution. Simulation results show that, over various workload sets, our algorithm has significant advantages in terms of efficiency and performance compared with previous methods.
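A compact illustration of multi-dimensional placement of the kind MIRS addresses is given below: a request is placed on the host whose remaining CPU/memory/bandwidth vector both fits the demand and scores highest under a dot-product rule. The rule and the data are illustrative assumptions, not the paper's scoring.

```python
# Sketch of a multi-dimensional placement heuristic: a request with
# CPU/memory/bandwidth demands goes to the host whose remaining capacity
# vector fits the demand and maximizes a dot-product score. The rule and the
# sample hosts are illustrative assumptions, not MIRS's exact scoring.

def fits(demand, free):
    return all(d <= f for d, f in zip(demand, free))

def place(demand, hosts):
    """hosts: dict name -> remaining (cpu, mem, net). Returns chosen host or None."""
    candidates = {n: f for n, f in hosts.items() if fits(demand, f)}
    if not candidates:
        return None
    best = max(candidates,
               key=lambda n: sum(d * f for d, f in zip(demand, candidates[n])))
    hosts[best] = tuple(f - d for d, f in zip(demand, hosts[best]))
    return best

hosts = {"h1": (8, 16, 1.0), "h2": (4, 32, 2.0)}
print(place((2, 8, 0.5), hosts), hosts)
```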
When an emergency happens, the scheduling of relief resources to multiple emergency locations is a realistic and intricate problem, especially when the available resources are limited. A non-cooperative game model and an algorithm for scheduling relief resources are presented. In the model, the players correspond to the multiple emergency locations, the strategies correspond to all resource schedules, and the payoff of each emergency location is the reciprocal of its scheduling cost. Thus, the optimal results are determined by the Nash equilibrium point of this game. An iterative algorithm is then introduced to seek the Nash equilibrium point. Simulation and analysis demonstrate the feasibility and availability of the model.
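The iterative search for the Nash equilibrium can be sketched as repeated best responses: each emergency location in turn switches to the allocation that minimizes its own cost until no location wants to deviate. The cost matrix, the capacity handling, and the round-robin initialization below are assumptions for illustration.

```python
# Sketch of an iterative best-response search for a pure-strategy Nash
# equilibrium among emergency locations: in each round every location switches
# to the depot that minimizes its own scheduling cost, given remaining
# capacity, until no location wants to deviate. The cost matrix, capacities
# and the round-robin initialization are toy assumptions.

def best_response(costs, cap, choice, player):
    """Cheapest depot for `player` among depots with spare capacity
    (its current depot is always allowed, so it can keep its assignment)."""
    options = [d for d in range(len(cap)) if cap[d] > 0 or d == choice[player]]
    return min(options, key=lambda d: costs[player][d])

def nash_iterate(costs, capacity, max_rounds=100):
    n_depots = len(capacity)
    choice = [p % n_depots for p in range(len(costs))]   # round-robin start
    cap = capacity[:]
    for c in choice:
        cap[c] -= 1
    for _ in range(max_rounds):
        changed = False
        for p in range(len(costs)):
            new = best_response(costs, cap, choice, p)
            if new != choice[p]:
                cap[choice[p]] += 1
                cap[new] -= 1
                choice[p] = new
                changed = True
        if not changed:            # no player deviates: Nash equilibrium reached
            return choice
    return choice

# 4 emergency locations, 2 depots; costs[p][d] is location p's cost from depot d
costs = [[2, 5], [4, 1], [9, 3], [6, 2]]
print(nash_iterate(costs, capacity=[2, 3]))   # e.g. [0, 1, 1, 1]
```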
With the rapid development of data applications in Industrial Internet of Things (IIoT) scenarios, how to schedule resources in IIoT environments has become an urgent problem to be solved. Owing to its strong scalability and compatibility, Kubernetes has been applied to resource scheduling in IIoT scenarios. However, the limited types of resources it manages, its default scheduling scoring strategy, and the lack of a delay control module limit its resource scheduling performance. To address these problems, this paper proposes a multi-resource scheduling (MRS) scheme of Kubernetes for IIoT. The MRS scheme dynamically balances resource utilization by taking both the requirements of tasks and the current system state into consideration. Furthermore, experiments demonstrate the effectiveness of the MRS scheme in terms of delay control and resource utilization.
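To illustrate the kind of scoring a multi-resource scheduler can use instead of a default strategy, the standalone sketch below rewards balanced post-placement utilization across several resource types and penalizes estimated delay; it is not Kubernetes plugin code, and the resource names, delay term, and weights are illustrative assumptions.

```python
from statistics import pstdev

# Standalone sketch of a multi-resource, delay-aware node score: it rewards
# balanced post-placement utilization across several resource types and
# penalizes estimated network delay. This is not Kubernetes plugin code; the
# resource names, delay term and weights are illustrative assumptions.

def score_node(requested, allocatable, used, delay_ms, w_balance=0.7, w_delay=0.3):
    util = [(used[r] + requested[r]) / allocatable[r] for r in requested]
    if any(u > 1.0 for u in util):
        return -1.0                      # node cannot hold the task
    balance = 1.0 - pstdev(util)         # higher when utilizations are even
    delay = 1.0 / (1.0 + delay_ms)       # higher when the node is close
    return w_balance * balance + w_delay * delay

request = {"cpu": 2, "mem": 4, "gpu": 0, "net": 0.2}
nodes = {
    "edge-1": ({"cpu": 8, "mem": 16, "gpu": 1, "net": 1.0},
               {"cpu": 5, "mem": 6, "gpu": 0, "net": 0.5}, 5),
    "edge-2": ({"cpu": 16, "mem": 32, "gpu": 2, "net": 2.0},
               {"cpu": 2, "mem": 20, "gpu": 1, "net": 0.3}, 20),
}
best = max(nodes, key=lambda n: score_node(request, *nodes[n]))
print(best)
```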