Cloud computing provides a diverse and adaptable resource pool over the internet, allowing users to tap into various resources as needed. It has been seen as a robust solution to relevant challenges. A significant delay can hamper the performance of IoT-enabled cloud platforms. However, efficient task scheduling can lower the cloud infrastructure's energy consumption, thus maximizing the service provider's revenue by decreasing user job processing times. The proposed Modified Chimp-Whale Optimization Algorithm (MCWOA) combines elements of the Chimp Optimization Algorithm (COA) and the Whale Optimization Algorithm (WOA). To enhance MCWOA's identification precision, the Sobol sequence is used in the population initialization phase, ensuring an even distribution of the population across the solution space. Moreover, the algorithm's local search capability is augmented by incorporating the whale optimization algorithm's bubble-net hunting and random search mechanisms into MCWOA's position-updating process. This study demonstrates the effectiveness of the proposed approach using a two-story rigid frame and a simply supported beam model. Simulated outcomes reveal that the new method outperforms the original algorithm, especially in multi-damage detection scenarios. MCWOA excels in avoiding false positives and enhancing computational speed, making it an optimal choice for structural damage detection. The efficiency of the proposed MCWOA is assessed against metrics such as energy usage, computational expense, task duration, and delay. The simulated data indicates that the new MCWOA outpaces other methods across all metrics. The study also references the Whale Optimization Algorithm (WOA), Chimp Algorithm (CA), Ant Lion Optimizer (ALO), Genetic Algorithm (GA) and Grey Wolf Optimizer (GWO).
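The Sobol-sequence initialization step can be sketched briefly. The snippet below is a minimal illustration using SciPy's quasi-Monte Carlo module with hypothetical search-space bounds; it is not the paper's implementation.

```python
import numpy as np
from scipy.stats import qmc

def init_population(pop_size, lower, upper, seed=0):
    """Spread an initial population evenly over the search box with a Sobol sequence."""
    dim = len(lower)
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    unit = sampler.random(pop_size)        # quasi-random points in [0, 1)^dim
    return qmc.scale(unit, lower, upper)   # rescale to [lower, upper]

# Hypothetical 10-dimensional encoding of a scheduling solution in [0, 1]
pop = init_population(64, np.zeros(10), np.ones(10))
print(pop.shape)  # (64, 10)
```

Using a power-of-two population size (64 here) preserves the balance properties of the Sobol sequence.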
This paper addresses the problem of sensor search scheduling in the complicated space environment faced by a low-earth-orbit constellation. Several search scheduling methods based on the commonly used information gain are first compared via simulations. A novel search scheduling method for scenarios with uncertain observations is then proposed, based on the global Shannon information gain and a beta-density-based uncertainty model. Simulation results indicate that the beta density model is a good option for solving the problem of target acquisition in complicated space environments.
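As a rough illustration of how a Shannon information gain can be evaluated under a beta-density uncertainty model, the sketch below estimates the expected entropy reduction from a single detection whose probability is Beta-distributed. The prior parameters are hypothetical and this is only one plausible reading of such a model, not the paper's formulation.

```python
import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def expected_information_gain(a, b, n_samples=200_000, seed=0):
    """I(X; p) = H(E[p]) - E[H(p)] for a Bernoulli detection X with p ~ Beta(a, b)."""
    rng = np.random.default_rng(seed)
    p = rng.beta(a, b, n_samples)
    marginal_entropy = binary_entropy(a / (a + b))   # entropy at the mean detection probability
    conditional_entropy = binary_entropy(p).mean()   # Monte Carlo estimate of E[H(p)]
    return marginal_entropy - conditional_entropy

# Hypothetical prior over the probability of detecting a target in one dwell
print(expected_information_gain(2.0, 5.0))
```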
Reduction of conservatism is one of the key and difficult problems in multiplier-based missile robust gain-scheduling autopilot design. This article presents a scheme that adopts the linear parameter-varying (LPV) control approach with full block multipliers to design a missile robust gain-scheduling autopilot and thereby eliminate conservatism. A model-matching design structure with a high demand on matching precision is constructed from the missile linear fractional transformation (LFT) model. By applying the full block S-procedure and the elimination lemma, a convex feasibility problem with an infinite number of constraints is formulated to satisfy robust quadratic performance specifications. A grid method is then adopted to transform this infinite-dimensional convex feasibility problem into a solvable finite-dimensional one, from which a gain-scheduling controller with linear fractional dependence on the flight Mach number and altitude is derived. Static and dynamic simulation results show the effectiveness and feasibility of the proposed scheme.
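The gridding step can be illustrated in isolation. The sketch below is a toy example rather than the paper's synthesis procedure: it checks a parameter-dependent stability LMI on a finite grid of hypothetical Mach/altitude points with CVXPY, which is the essence of replacing an infinite family of constraints with finitely many.

```python
import cvxpy as cp
import numpy as np

def a_matrix(mach, alt_km):
    """Hypothetical parameter-dependent short-period dynamics A(theta)."""
    return np.array([[-0.5 - 0.2 * mach, 1.0],
                     [-2.0 - 0.5 * alt_km, -0.8 * mach]])

# Grid the scheduling box: infinite constraint family -> finite constraint set
machs = np.linspace(0.8, 3.0, 6)
alts = np.linspace(5.0, 20.0, 6)

n = 2
P = cp.Variable((n, n), symmetric=True)
eps = 1e-3
constraints = [P >> eps * np.eye(n)]
for m in machs:
    for h in alts:
        A = a_matrix(m, h)
        constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve(solver=cp.SCS)
print(prob.status)  # 'optimal' indicates a common P exists on the grid
```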
This paper considers the design and implementation of an electro-hydraulic control system for a robotic excavator, namely the Lancaster University computerized and intelligent excavator (LUCIE). The excavator was developed to dig trenches autonomously without human intervention. One stumbling block is achieving adequate, accurate, quick and smooth movement under automatic control, which is difficult for traditional control algorithms such as PI/PID. A gain scheduling design, based on the true digital proportional-integral-plus (PIP) control methodology, was utilized to regulate the nonlinear joint dynamics. Simulation and initial field tests both demonstrated the feasibility and robustness of the proposed technique against parameter uncertainties, time delay and load disturbances, with the excavator arm directed along specified trajectories in a smooth, fast and accurate manner. The tracking error magnitudes for oblique straight lines and horizontal straight lines are less than 20 mm and 50 mm, respectively, while the velocity reaches 9 m/min.
A real-time dwell scheduling model that takes both time and energy constraints into account is founded from the viewpoint of scheduling gain, turning scheduling design into a nonlinear programming procedure. A real-time dwell scheduling algorithm based on the scheduling gain is presented with the help of two heuristic rules. The simulation results demonstrate that, compared with the conventional adaptive scheduling method, the proposed algorithm not only increases the scheduling gain and the time utility but also decreases the task drop rate.
The equilibrium manifold linearization model of nonlinear shock motion offers higher accuracy and lower complexity than other models such as the small perturbation model and the piecewise-linear model. This paper analyzes the physical significance of the equilibrium manifold linearization model and reveals the self-feedback mechanism of shock motion, which helps to describe the stability and dynamics of shock motion. Based on the model, the paper puts forward a gain scheduling control method for nonlinear shock motion. Simulation has shown the validity of the control scheme.
Time-Sensitive Network (TSN), with its deterministic transmission capability, is increasingly used in many emerging fields. It mainly guarantees the Quality of Service (QoS) of applications with strict requirements on time and security. One of the core features of TSN is traffic scheduling with bounded low delay in the network. However, traffic scheduling schemes in TSN are usually synthesized offline and lack dynamism. To implement incremental scheduling of newly arrived traffic in TSN, we propose a Dynamic Response Incremental Scheduling (DR-IS) method for time-sensitive traffic and deploy it on a software-defined time-sensitive network architecture. Under the premise of meeting the traffic scheduling requirements, we adopt two modes, traffic shift and traffic exchange, to dynamically adjust the time-slot injection positions of the traffic in the original scheme, and we determine the sending offset time of the new time-sensitive traffic so as to minimize the global traffic transmission jitter. The evaluation results show that the DR-IS method can effectively control the large increase of traffic transmission jitter in incremental scheduling without affecting the transmission delay, thus realizing dynamic incremental scheduling of time-sensitive traffic in TSN.
The distributed flexible job shop scheduling problem (DFJSP) has attracted great attention with the growth of the global manufacturing industry. General DFJSP research only considers machine constraints and ignores worker constraints. As one critical factor of production, effective utilization of worker resources can increase productivity. Meanwhile, energy consumption is a growing concern due to increasingly serious environmental issues. Therefore, the distributed flexible job shop scheduling problem with dual resource constraints (DFJSP-DRC), minimizing makespan and total energy consumption, is studied in this paper. To solve the problem, we present a multi-objective mathematical model for DFJSP-DRC and propose a Q-learning-based multi-objective grey wolf optimizer (Q-MOGWO). In Q-MOGWO, high-quality initial solutions are generated by a hybrid initialization strategy, and an improved active decoding strategy is designed to obtain the scheduling schemes. To further enhance the local search capability and expand the solution space, two wolf predation strategies and three critical-factory neighborhood structures based on Q-learning are proposed. These strategies and structures enable Q-MOGWO to explore the solution space more efficiently and thus find better Pareto solutions. The effectiveness of Q-MOGWO in addressing DFJSP-DRC is verified through comparison with four algorithms on 45 instances. The results reveal that Q-MOGWO outperforms the comparison algorithms in terms of solution quality.
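The role Q-learning plays in such hybrids, choosing which neighborhood (local search) operator to apply next instead of picking one at random, can be sketched as a small epsilon-greedy tabular learner. Everything below (state definition, reward, operator count) is a hypothetical placeholder that only illustrates the selection mechanism.

```python
import random

class OperatorSelector:
    """Epsilon-greedy tabular Q-learning over local search operators."""

    def __init__(self, n_states, n_ops, alpha=0.1, gamma=0.9, eps=0.2):
        self.q = [[0.0] * n_ops for _ in range(n_states)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def select(self, state):
        if random.random() < self.eps:                      # explore
            return random.randrange(len(self.q[state]))
        row = self.q[state]
        return max(range(len(row)), key=row.__getitem__)    # exploit best-known operator

    def update(self, state, op, reward, next_state):
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][op] += self.alpha * (td_target - self.q[state][op])

# Hypothetical use inside a metaheuristic iteration: the state could encode recent
# improvement, op indexes one of the neighborhood moves, and the reward could be the
# makespan/energy improvement obtained by that move.
sel = OperatorSelector(n_states=3, n_ops=6)
op = sel.select(state=0)
sel.update(state=0, op=op, reward=1.0, next_state=1)
```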
Gain scheduling is widely applied in modern flight control, where control gains are scheduled with variables such as dynamic pressure and Mach number to meet dynamic response requirements in different flight conditions. Classical gain-scheduling approaches may cause problems, since they cannot guarantee global robustness and stability during transitions between flight conditions. This paper systematically investigates the gain-scheduling problem from a robustness point of view. Detailed procedures for a gain-scheduled controller to achieve both robustness and stability are given and applied to a typical flight control system. For the switching stability problem between different flight conditions, a new approach is proposed in which the different flight conditions are first reduced to a parameter-varying plant by interpolation, after which a parameter-varying controller is designed. Although interpolation errors may exist, the robust parameter-varying controller design can compensate for those uncertainties and errors, finally achieving good robustness and switching stability during transitions. An illustrative simulation shows satisfactory results.
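The interpolation step that turns a set of point designs into a scheduled controller can be sketched very simply. The snippet below linearly interpolates hypothetical controller gains over dynamic pressure; it is only an illustration of the scheduling idea, not the paper's design.

```python
import numpy as np

# Hypothetical gains designed at a few trim points (dynamic pressure in kPa)
trim_qbar = np.array([5.0, 15.0, 30.0, 60.0])
kp_trim   = np.array([2.1, 1.6, 1.1, 0.7])
ki_trim   = np.array([0.8, 0.6, 0.4, 0.3])

def scheduled_gains(qbar):
    """Linearly interpolate the point designs (clamped at the end points)."""
    kp = np.interp(qbar, trim_qbar, kp_trim)
    ki = np.interp(qbar, trim_qbar, ki_trim)
    return kp, ki

print(scheduled_gains(22.0))   # gains used between the 15 and 30 kPa designs
```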
In recent years, target tracking has been considered one of the most important applications of wireless sensor networks (WSNs). Optimizing target tracking performance and prolonging network lifetime are two equally critical objectives in this scenario, and existing mechanisms still have weaknesses in balancing the two demands. The proposed heuristic multi-node collaborative scheduling mechanism (HMNCS) comprises cluster head (CH) election, pre-selection, and task set selection mechanisms, where the latter two form a two-layer selection mechanism. The CH election innovatively introduces the movement trend of the target and establishes a scoring mechanism to determine the optimal CH, which can delay CH rotation and thus reduce energy consumption. The pre-selection mechanism adaptively filters out suitable nodes as the candidate task set to apply for tracking tasks, which reduces the application consumption and the overhead of the subsequent task set selection. Finally, the task node selection is mathematically transformed into an optimization problem, and a genetic algorithm is adopted to form the final task set in the task set selection mechanism. Simulation results show that HMNCS outperforms the other compared mechanisms in tracking accuracy and network lifetime.
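To make the genetic-algorithm step concrete, the sketch below evolves a bitmask over candidate nodes under a hypothetical fitness that trades off tracking quality against energy cost; the encoding and fitness are assumptions, not the paper's formulation.

```python
import random

random.seed(1)

N_NODES = 12
# Hypothetical per-node tracking quality (e.g., proximity to the predicted target) and energy cost
quality = [random.uniform(0.0, 1.0) for _ in range(N_NODES)]
cost = [random.uniform(0.1, 0.5) for _ in range(N_NODES)]

def fitness(mask):
    """Reward covered quality, penalize energy; an assumed trade-off, not the paper's objective."""
    q = sum(quality[i] for i in range(N_NODES) if mask[i])
    c = sum(cost[i] for i in range(N_NODES) if mask[i])
    return q - 0.8 * c

def crossover(a, b):
    cut = random.randrange(1, N_NODES)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in mask]

pop = [[random.randint(0, 1) for _ in range(N_NODES)] for _ in range(30)]
for _ in range(50):                       # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                    # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    pop = parents + children

best = max(pop, key=fitness)
print([i for i, bit in enumerate(best) if bit])   # indices of the selected task nodes
```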
The arm-driven inverted pendulum is a highly nonlinear, multivariable and absolutely unstable dynamic system, so it is very difficult to obtain an exact mathematical model and to balance the inverted pendulum with a variable arm position. To solve this problem, this paper presents a mathematical model for the arm-driven inverted pendulum in the mid-position configuration and an adaptive gain-scheduling linear quadratic regulator control method for stabilizing the inverted pendulum. The proposed controllers are simulated using MATLAB/Simulink and implemented on an experimental system using a PIC18F4431 microcontroller. The experimental results show very good control performance over a wide range of arm-position stabilization.
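One common way to realize a gain-scheduled LQR is to solve the continuous-time algebraic Riccati equation at several operating points and switch or interpolate the resulting gains. The sketch below does this for hypothetical linearized pendulum models; the matrices are placeholders, not the paper's model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """K = R^{-1} B^T P, with P from the continuous-time algebraic Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Hypothetical linearized pendulum dynamics at two arm positions
A1 = np.array([[0.0, 1.0], [12.0, -0.2]])
A2 = np.array([[0.0, 1.0], [18.0, -0.2]])
B = np.array([[0.0], [1.0]])
Q = np.diag([50.0, 1.0])
R = np.array([[0.5]])

gains = {"arm_low": lqr_gain(A1, B, Q, R), "arm_high": lqr_gain(A2, B, Q, R)}

def control(x, operating_point):
    """Pick the gain scheduled on the current arm position and apply u = -K x."""
    K = gains[operating_point]
    return float(-(K @ x))

print(control(np.array([0.05, 0.0]), "arm_low"))
```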
As cloud quantum computing gains broader acceptance, a growing number of researchers are directing their focus towards this domain. Nevertheless, the rapid surge in demand for cloud-based quantum computing resources has led to a scarcity, which in turn hampers users from achieving optimal satisfaction. Therefore, cloud quantum computing service providers require a unified analysis and scheduling framework for their quantum resources and user jobs to meet the ever-growing usage demands. This paper introduces a new multi-programming scheduling framework for quantum computing in a cloud environment. The framework addresses the issue of limited quantum computing resources in cloud environments and ensures a satisfactory user experience. It introduces three innovative designs: 1) The framework automatically allocates tasks to different quantum backends while ensuring fairness among users by considering both the cloud-based quantum resources and the user-submitted tasks. 2) A multi-programming mechanism is employed across different quantum backends to enhance the overall throughput of the quantum cloud; in comparison to conventional task schedulers, the proposed framework achieves a throughput improvement of more than two-fold. 3) The framework can balance fidelity and user waiting time by adaptively adjusting scheduling parameters.
In current research on task offloading and resource scheduling in vehicular networks, vehicles are commonly assumed to maintain constant speed or relatively stationary states, and the impact of speed variations on task offloading is often overlooked. It is frequently assumed that vehicles can be accurately modeled during actual motion. However, in dynamic vehicular environments, both the tasks generated by the vehicles and the vehicles' surroundings are constantly changing, making real-time modeling of actual dynamic vehicular network scenarios difficult. Taking actual dynamic vehicular scenarios into account, this paper considers the real-time, non-uniform movement of vehicles and proposes a dynamic task offloading and scheduling algorithm for single-task, multi-vehicle vehicular network scenarios, attempting to solve the dynamic decision-making problem in the task offloading process. The optimization objective is to minimize the average task completion time, which is formulated as a multi-constrained non-linear programming problem. Due to the mobility of vehicles, a constraint model is applied in the decision-making process to dynamically determine whether the communication range is sufficient for task offloading and transmission. Finally, the proposed dynamic offloading and scheduling algorithm, based on multi-agent deep deterministic policy gradient (MADDPG), is applied to solve the optimization problem. Simulation results show that the proposed algorithm achieves lower-latency task computation offloading. Meanwhile, the average task completion time of the proposed algorithm improves by 7.6% compared to the MADDPG scheme and by 51.1% compared to deep deterministic policy gradient (DDPG).
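The communication-range constraint mentioned above can be illustrated with a simple feasibility check: offloading is allowed only if the vehicle is expected to remain within the serving node's coverage for the whole transmission. The delay model and all quantities below are hypothetical placeholders.

```python
import math

def stays_in_range(veh_xy, veh_velocity, node_xy, radius, tx_time):
    """Will the vehicle still be within `radius` of the node after tx_time seconds?

    Assumes straight-line motion at the current velocity over the (short) transmission window.
    """
    x = veh_xy[0] + veh_velocity[0] * tx_time
    y = veh_xy[1] + veh_velocity[1] * tx_time
    return math.hypot(x - node_xy[0], y - node_xy[1]) <= radius

def transmission_time(task_bits, rate_bps):
    """Time to upload the task over the wireless link."""
    return task_bits / rate_bps

# Hypothetical numbers: a 4 Mbit task over a 10 Mbit/s link from a vehicle moving at 20 m/s
tx = transmission_time(4e6, 10e6)
ok = stays_in_range((80.0, 0.0), (20.0, 0.0), (0.0, 0.0), radius=150.0, tx_time=tx)
print(tx, ok)
```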
Currently, applications mainly access remote computing resources through cloud data centers, but this mode of operation greatly increases communication latency and reduces the overall quality of service (QoS) and quality of experience (QoE). Edge computing technology extends cloud service functionality to the edge of the mobile network, closer to where tasks are executed, and can effectively mitigate the communication latency problem. However, the massive and heterogeneous nature of servers in edge computing systems brings new challenges to task scheduling and resource management, and the booming development of artificial neural networks provides more powerful methods to alleviate this limitation. Therefore, in this paper, we propose a time series forecasting model incorporating Conv1D, LSTM and GRU layers for edge computing device resource scheduling, train and test the forecasting model on a small self-built dataset, and achieve competitive experimental results.
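A stacked Conv1D/LSTM/GRU forecaster of the kind described can be sketched in Keras. Layer sizes, the look-back window and the single-step output below are assumptions for illustration, not the configuration reported in the paper.

```python
import tensorflow as tf

WINDOW = 48      # hypothetical look-back window of resource-usage samples
FEATURES = 3     # e.g., CPU, memory, network utilization (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.Conv1D(32, kernel_size=3, padding="causal", activation="relu"),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(1),               # next-step utilization forecast
])
model.compile(optimizer="adam", loss="mse")
model.summary()

# Training would use sliding windows over the historical resource trace, e.g.:
# model.fit(x_train, y_train, epochs=50, batch_size=32, validation_split=0.1)
```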
Cloud service providers generally co-locate online services and batch jobs onto the same computer cluster, where the resources can be pooled in order to maximize data center resource utilization. Due to resource competition between batch jobs and online services, co-location frequently impairs the performance of online services. This study presents a quality of service (QoS) prediction-based scheduling model (QPSM) for co-located workloads. The performance prediction in QPSM consists of two parts: prediction of an online service's QoS anomaly based on XGBoost, and prediction of the completion time of an offline batch job based on random forest. Online-service QoS anomaly prediction is used to evaluate the influence of the batch-job mix on online-service performance, and batch-job completion time prediction is utilized to reduce the total waiting time of batch jobs. When the same number of batch jobs are scheduled in experiments on typical test sets such as CloudSuite, the scheduling time required by QPSM is reduced by about 6 h on average compared with the first-come, first-served strategy and by about 11 h compared with the random scheduling strategy. Compared with the non-co-located situation, QPSM improves CPU resource utilization by 12.15% and memory resource utilization by 5.7% on average. Experiments show that the QPSM scheduling strategy proposed in this study can effectively guarantee the quality of online services and further improve cluster resource utilization.
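The two predictors can be sketched independently of the scheduler: a classifier for QoS anomalies and a regressor for batch-job completion time. The feature set, labels and hyperparameters below are assumptions; the snippet only shows the shape of such a pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Hypothetical features: per-node CPU, memory, cache-miss rate, co-runner counts, etc.
X_anomaly = rng.random((500, 6))
y_anomaly = (X_anomaly[:, 0] + X_anomaly[:, 3] > 1.2).astype(int)   # toy anomaly label

X_batch = rng.random((500, 6))
y_runtime = 30 + 200 * X_batch[:, 1] + 10 * rng.random(500)         # toy completion time (s)

qos_model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
qos_model.fit(X_anomaly, y_anomaly)

runtime_model = RandomForestRegressor(n_estimators=200, random_state=0)
runtime_model.fit(X_batch, y_runtime)

candidate = rng.random((1, 6))
if qos_model.predict(candidate)[0] == 0:            # no QoS anomaly expected
    print("co-locate; expected runtime:", runtime_model.predict(candidate)[0])
else:
    print("defer this batch job to protect the online service")
```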
The flow shop scheduling problem is important for the manufacturing industry, and effective flow shop scheduling can bring great benefits. However, there is little research on Distributed Hybrid Flow Shop Problems (DHFSP) with learning-assisted meta-heuristics. This work addresses a DHFSP with the objective of minimizing the maximum completion time (makespan). First, a mathematical model is developed for the concerned DHFSP. Second, four Q-learning-assisted meta-heuristics are proposed, based on the genetic algorithm (GA), artificial bee colony algorithm (ABC), particle swarm optimization (PSO), and differential evolution (DE). According to the nature of DHFSP, six local search operations are designed for finding high-quality solutions in the local space. Instead of random selection, Q-learning assists the meta-heuristics in choosing the appropriate local search operations during iterations. Finally, comprehensive numerical experiments on 60 cases are conducted to assess the effectiveness of the proposed algorithms. The experimental results and discussions prove that using Q-learning to select appropriate local search operations is more effective than the random strategy. To verify the competitiveness of the Q-learning-assisted meta-heuristics, they are compared with the improved iterated greedy algorithm (IIG), which also solves DHFSP. The Friedman test is executed on the results of the five algorithms. It is concluded that the four Q-learning-assisted meta-heuristics perform better than IIG, and the Q-learning-assisted PSO shows the best competitiveness.
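The Friedman test used for the final comparison is available in SciPy; the sketch below ranks five hypothetical algorithms over a set of instances and reports the test statistic and p-value. The numbers are placeholders, not the paper's results.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(42)
n_instances = 60

# Hypothetical makespan results of five algorithms on the same 60 instances
base = rng.uniform(900, 1100, n_instances)
results = {
    "Q-GA":  base + rng.normal(0, 10, n_instances),
    "Q-ABC": base + rng.normal(5, 10, n_instances),
    "Q-PSO": base + rng.normal(-8, 10, n_instances),
    "Q-DE":  base + rng.normal(3, 10, n_instances),
    "IIG":   base + rng.normal(15, 10, n_instances),
}

stat, p = friedmanchisquare(*results.values())
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4g}")
# A small p-value indicates the algorithms' makespan distributions differ significantly.
```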
Improving the cooperative scheduling efficiency of equipment is key for automated container terminals to cope with the trend towards large-scale ships. In order to improve the solution efficiency of the existing space-time network (STN) model for the cooperative scheduling problem of yard cranes (YCs) and automated guided vehicles (AGVs), and to extend its application scenarios, two improved STN models are proposed. The flow balance constraints in the original model are decomposed, and the trajectory constraints of YCs and AGVs are added to obtain the model STN_A. The coupling constraint in STN_A is updated, and buffer constraints are added to STN_A to build the model STN_B. As the size of the problem increases, the solution speed of CPLEX becomes the bottleneck, so a heuristic method containing three groups of heuristic rules is designed to obtain a near-optimal solution quickly. Experimental results show that the computation time of STN_A is shortened by 49.47% on average and the gap is reduced by 1.69% on average compared with the original model. The gap between the solution of the heuristic rules and the solution of CPLEX is less than 3.50%, and the solution time of the heuristic rules is on average 99.85% less than that of CPLEX. Compared with STN_A, the computation time for solving STN_B increases by 58.93% on average.
The growing development of the Internet of Things (IoT) is accelerating the emergence and growth of new IoT services and applications, which will result in massive amounts of data being generated, transmitted and processed in wireless communication networks. Mobile Edge Computing (MEC) is a desirable paradigm for timely processing of IoT data for value maximization. In MEC, a number of computing-capable devices are deployed at the network edge near the data sources to support edge computing, so that the long network transmission delay of the cloud computing paradigm can be avoided. Since an edge device might not always have sufficient resources to process the massive amount of data, computation offloading with cooperation among edge devices is significantly important. However, the dynamic traffic characteristics and heterogeneous computing capabilities of edge devices challenge the offloading. In addition, different scheduling schemes may provide different computation delays to the offloaded tasks; thus, offloading at the mobile nodes and scheduling at the MEC server are coupled in determining service delay. This paper seeks to guarantee low delay for computation-intensive applications by jointly optimizing offloading and scheduling in such an MEC system. We propose a Delay-Greedy Computation Offloading (DGCO) algorithm to make offloading decisions for new tasks in distributed computing-enabled mobile devices. A Reinforcement-Learning-based Parallel Scheduling (RLPS) algorithm is further designed to schedule offloaded tasks in the multi-core MEC server. With an offloading-delay broadcast mechanism, DGCO and RLPS cooperate to achieve the goal of delay-guarantee-ratio maximization. Finally, the simulation results show that our proposal can bound the end-to-end delay of various tasks. Even under a slightly heavy task load, the delay-guarantee ratio given by DGCO-RLPS can still approximate 95%, while that given by the benchmark algorithms drops to an intolerable value. The simulation results demonstrate the effectiveness of DGCO-RLPS for delay guarantees in MEC.
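The delay-greedy idea, picking for each new task the processing option with the smallest estimated completion delay, can be sketched as a simple argmin over local execution and the known edge devices. The delay model below (upload time plus queued work plus execution time) and all numbers are assumptions for illustration, not the DGCO algorithm itself.

```python
from dataclasses import dataclass

@dataclass
class EdgeDevice:
    name: str
    rate_mbps: float        # wireless uplink rate to this device
    queued_cycles: float    # work already waiting, in CPU cycles
    cpu_hz: float           # processing speed

def estimated_delay(task_bits, task_cycles, dev):
    """Estimated completion delay = upload time + queued work + execution time."""
    upload = task_bits / (dev.rate_mbps * 1e6)
    return upload + (dev.queued_cycles + task_cycles) / dev.cpu_hz

def greedy_offload(task_bits, task_cycles, local_cpu_hz, devices):
    """Pick local execution or the edge device with the smallest estimated delay."""
    best_name, best_delay = "local", task_cycles / local_cpu_hz
    for dev in devices:
        d = estimated_delay(task_bits, task_cycles, dev)
        if d < best_delay:
            best_name, best_delay = dev.name, d
    return best_name, best_delay

devices = [EdgeDevice("edge-1", 20.0, 2e9, 4e9), EdgeDevice("edge-2", 50.0, 8e9, 6e9)]
print(greedy_offload(task_bits=2e6, task_cycles=3e9, local_cpu_hz=1e9, devices=devices))
```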