Decision-making and motion planning are extremely important in autonomous driving to ensure safe driving in a real-world environment. This study proposes an online evolutionary decision-making and motion planning framework for autonomous driving based on a hybrid data- and model-driven method. First, a data-driven decision-making module based on deep reinforcement learning (DRL) is developed to pursue rational driving performance as much as possible. Then, model predictive control (MPC) is employed to execute both longitudinal and lateral motion planning tasks. Multiple constraints are defined according to the vehicle's physical limits to meet the driving task requirements. Finally, two principles of safety and rationality for the self-evolution of autonomous driving are proposed. A motion envelope is established and embedded into a rational exploration and exploitation scheme, which filters out unreasonable experiences by masking unsafe actions so as to collect high-quality training data for the DRL agent. Experiments with a high-fidelity vehicle model and a MATLAB/Simulink co-simulation environment are conducted, and the results show that the proposed online-evolution framework is able to generate safer, more rational, and more efficient driving actions in a real-world environment.
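The action-masking idea described above can be made concrete with a small sketch: before the DRL agent picks an action, candidate actions whose predicted next state would leave a simple motion envelope are filtered out. The envelope bounds, the one-step prediction model, and all names below are illustrative assumptions, not the paper's actual vehicle model or envelope definition.

```python
import numpy as np

# Hypothetical sketch of masking unsafe actions against a motion envelope.
V_MAX = 20.0          # m/s, assumed speed envelope
LAT_OFFSET_MAX = 1.5  # m, assumed lateral-deviation envelope
DT = 0.1              # s, decision step

def predict_next_state(state, action):
    """Very rough kinematic one-step prediction (an assumption for illustration)."""
    speed, lat_offset = state
    accel, lat_rate = action
    return speed + accel * DT, lat_offset + lat_rate * DT

def safe_action_mask(state, candidate_actions):
    """Boolean mask over candidate actions whose predicted next state stays inside the envelope."""
    mask = []
    for a in candidate_actions:
        speed, lat = predict_next_state(state, a)
        mask.append(0.0 <= speed <= V_MAX and abs(lat) <= LAT_OFFSET_MAX)
    return np.array(mask)

def masked_greedy_action(q_values, mask):
    """Pick the highest-Q action among those the envelope allows."""
    q = np.where(mask, q_values, -np.inf)
    return int(np.argmax(q))
```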
While autonomous vehicles are vital components of intelligent transportation systems, ensuring the trustworthiness of decision-making remains a substantial challenge in realizing autonomous driving. Therefore, we present a novel robust reinforcement learning approach with safety guarantees to attain trustworthy decision-making for autonomous vehicles. The proposed technique ensures decision trustworthiness in terms of policy robustness and collision safety. Specifically, an adversary model is learned online to simulate the worst-case uncertainty by approximating the optimal adversarial perturbations on the observed states and environmental dynamics. In addition, an adversarial robust actor-critic algorithm is developed to enable the agent to learn robust policies against perturbations in observations and dynamics. Moreover, we devise a safety mask to guarantee the collision safety of the autonomous driving agent during both the training and testing processes using an interpretable knowledge model known as the Responsibility-Sensitive Safety Model. Finally, the proposed approach is evaluated through both simulations and experiments. These results indicate that the autonomous driving agent can make trustworthy decisions and drastically reduce the number of collisions through robust safety policies.
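The Responsibility-Sensitive Safety model referenced above prescribes a minimum longitudinal gap below which the rear vehicle must brake. A hedged sketch of how such a rule could serve as a safety mask is shown below; the response time and acceleration bounds are assumed parameter values, not the paper's.

```python
# Illustrative RSS-style longitudinal safety check that a safety mask could
# apply before executing an action. Parameter values are assumptions.

def rss_min_safe_distance(v_rear, v_front,
                          rho=0.5,          # response time [s] (assumed)
                          a_accel_max=3.0,  # max acceleration of rear car [m/s^2]
                          b_min=4.0,        # minimum braking of rear car [m/s^2]
                          b_max=8.0):       # maximum braking of front car [m/s^2]
    """Responsibility-Sensitive Safety minimum longitudinal gap."""
    v_rear_worst = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_rear_worst ** 2 / (2.0 * b_min)
         - v_front ** 2 / (2.0 * b_max))
    return max(d, 0.0)

def action_is_safe(gap, v_rear, v_front):
    """Safety mask: allow the action only if the current gap exceeds the RSS bound."""
    return gap >= rss_min_safe_distance(v_rear, v_front)

# Example: rear car at 20 m/s, front car at 15 m/s, 30 m apart.
print(action_is_safe(30.0, 20.0, 15.0))
```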
Due to ever-growing soccer data collection approaches and progressing artificial intelligence (AI) methods, soccer analysis, evaluation, and decision-making have received increasing interest not only from the professional sports analytics realm but also from the academic AI research community. AI brings game-changing approaches for soccer analytics, where soccer has been a typical benchmark for AI research, and the combination has been an emerging topic. In this paper, soccer match analytics are taken as a complete observation-orientation-decision-action (OODA) loop. In addition, as in AI frameworks such as that for reinforcement learning, interacting with a virtual environment enables an evolving model. Therefore, soccer analytics in both the real world and virtual domains are discussed. With the intersection of the OODA loop and the real-virtual domains, available soccer data, including event and tracking data, and diverse orientation and decision-making models for both real-world and virtual soccer matches are comprehensively reviewed. Finally, some promising directions in this interdisciplinary area are pointed out. It is claimed that paradigms for both professional sports analytics and AI research could be combined. Moreover, it is quite promising to bridge the gap between the real and virtual domains for soccer match analysis and decision-making.
The distributed flexible job shop scheduling problem (DFJSP) has attracted great attention with the growth of the global manufacturing industry. General DFJSP research only considers machine constraints and ignores worker constraints. As one critical factor of production, effective utilization of worker resources can increase productivity. Meanwhile, energy consumption is a growing concern due to increasingly serious environmental issues. Therefore, the distributed flexible job shop scheduling problem with dual resource constraints (DFJSP-DRC) for minimizing makespan and total energy consumption is studied in this paper. To solve the problem, we present a multi-objective mathematical model for DFJSP-DRC and propose a Q-learning-based multi-objective grey wolf optimizer (Q-MOGWO). In Q-MOGWO, high-quality initial solutions are generated by a hybrid initialization strategy, and an improved active decoding strategy is designed to obtain the scheduling schemes. To further enhance the local search capability and expand the solution space, two wolf predation strategies and three critical factory neighborhood structures based on Q-learning are proposed. These strategies and structures enable Q-MOGWO to explore the solution space more efficiently and thus find better Pareto solutions. The effectiveness of Q-MOGWO in addressing DFJSP-DRC is verified through comparison with four algorithms on 45 instances. The results reveal that Q-MOGWO outperforms the comparison algorithms in terms of solution quality.
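Since Q-MOGWO keeps non-dominated schedules for the two objectives named above, a minimal Pareto-filter sketch follows; the objective values are placeholders, and the paper's archive management is more elaborate than this.

```python
# Minimal non-dominated filtering over (makespan, total energy consumption).
# Solutions are (makespan, energy) tuples here purely for illustration.

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and strictly better in one."""
    return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])

def pareto_front(solutions):
    front = []
    for s in solutions:
        if not any(dominates(o, s) for o in solutions if o is not s):
            front.append(s)
    return front

# Example with hypothetical (makespan, energy) values.
print(pareto_front([(120, 950), (118, 990), (125, 900), (130, 1000)]))
```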
Humans are experiencing the inclusion of artificial agents in their lives, such as unmanned vehicles, service robots, voice assistants, and intelligent medical care. If artificial agents cannot align with social values or make ethical decisions, they may not meet the expectations of humans. Traditionally, an ethical decision-making framework is constructed by rule-based or statistical approaches. In this paper, we propose an ethical decision-making framework based on incremental ILP (Inductive Logic Programming), which can overcome the brittleness of rule-based approaches and the limited interpretability of statistical approaches. As current incremental ILP makes it difficult to resolve conflicts, we propose a novel ethical decision-making framework that considers conflicts and adopts our proposed incremental ILP system. The framework consists of two processes: the learning process and the deduction process. The first process records bottom clauses with their score functions and learns rules guided by the entailment and the score function. The second process obtains an ethical decision based on the rules. In an ethical scenario about chatbots for teenagers' mental health, we verify that our framework can learn ethical rules and make ethical decisions. In addition, we extract the incremental ILP component from the framework and compare it with state-of-the-art ILP systems based on ASP (Answer Set Programming), focusing on conflict resolution. The comparison results show that our proposed system can generate better-quality rules than most other systems.
In recent years, target tracking has been considered one of the most important applications of wireless sensor networks (WSN). Optimizing target tracking performance and prolonging network lifetime are two equally critical objectives in this scenario. The existing mechanisms still have weaknesses in balancing the two demands. The proposed heuristic multi-node collaborative scheduling mechanism (HMNCS) comprises cluster head (CH) election, pre-selection, and task set selection mechanisms, where the latter two kinds of selections form a two-layer selection mechanism. The CH election innovatively introduces the movement trend of the target and establishes a scoring mechanism to determine the optimal CH, which can delay the CH rotation and thus reduce energy consumption. The pre-selection mechanism adaptively filters out suitable nodes as the candidate task set to apply for tracking tasks, which can reduce the application consumption and the overhead of the following task set selection. Finally, the task node selection is mathematically transformed into an optimization problem, and a genetic algorithm is adopted to form the final task set in the task set selection mechanism. Simulation results show that HMNCS outperforms other compared mechanisms in tracking accuracy and network lifetime.
As cloud quantum computing gains broader acceptance, a growing number of researchers are directing their focus towards this domain. Nevertheless, the rapid surge in demand for cloud-based quantum computing resources has led to a scarcity, which in turn hampers users from achieving optimal satisfaction. Therefore, cloud quantum computing service providers require a unified analysis and scheduling framework for their quantum resources and user jobs to meet the ever-growing usage demands. This paper introduces a new multi-programming scheduling framework for quantum computing in a cloud environment. The framework addresses the issue of limited quantum computing resources in cloud environments and ensures a satisfactory user experience. It introduces three innovative designs: 1) The framework automatically allocates tasks to different quantum backends while ensuring fairness among users by considering both the cloud-based quantum resources and the user-submitted tasks. 2) A multi-programming mechanism is employed across different quantum backends to enhance the overall throughput of the quantum cloud; in comparison to conventional task schedulers, the proposed framework achieves a throughput improvement of more than two-fold. 3) The framework can balance fidelity and user waiting time by adaptively adjusting scheduling parameters.
Stroke is a chronic cerebrovascular disease that carries a high risk. Stroke risk assessment is of great significance in preventing, reversing and reducing the spread and the health hazards caused by stroke. Aiming to objectively predict and identify strokes, this paper proposes a new stroke risk assessment decision-making model named Logistic-AdaBoost (Logistic-AB) based on machine learning. First, the categorical boosting (CatBoost) method is used to perform feature selection for all features of stroke, and 8 main features are selected to form a new index evaluation system to predict the risk of stroke. Second, the borderline synthetic minority oversampling technique (SMOTE) algorithm is applied to transform the unbalanced stroke dataset into a balanced dataset. Finally, the stroke risk assessment decision-making model Logistic-AB is constructed, and the overall prediction performance of this new model is evaluated by comparing it with ten other similar models. The comparison results show that the new model proposed in this paper performs better than the two single algorithms (logistic regression and AdaBoost) on the four indicators of recall, precision, F1 score, and accuracy, and the overall performance of the proposed model is better than that of common machine learning algorithms. The Logistic-AB model presented in this paper can more accurately predict patients' stroke risk.
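A hedged sketch of the pipeline described above follows: CatBoost-based feature ranking, borderline-SMOTE rebalancing, then a boosted logistic model. Reading "Logistic-AdaBoost" as AdaBoost over logistic-regression base learners is an assumption on our part, and the dataset, split, and hyperparameters below are placeholders.

```python
import numpy as np
from catboost import CatBoostClassifier
from imblearn.over_sampling import BorderlineSMOTE
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def train_logistic_ab(X, y, n_features=8):
    # 1) Rank features with CatBoost and keep the top n_features.
    ranker = CatBoostClassifier(iterations=200, verbose=False)
    ranker.fit(X, y)
    top = np.argsort(ranker.feature_importances_)[::-1][:n_features]
    X_sel = X[:, top]

    X_tr, X_te, y_tr, y_te = train_test_split(
        X_sel, y, test_size=0.2, stratify=y, random_state=0)

    # 2) Balance the training split with borderline SMOTE.
    X_bal, y_bal = BorderlineSMOTE(random_state=0).fit_resample(X_tr, y_tr)

    # 3) AdaBoost over logistic-regression base learners (assumed combination).
    model = AdaBoostClassifier(
        estimator=LogisticRegression(max_iter=1000),  # requires scikit-learn >= 1.2
        n_estimators=100, random_state=0)
    model.fit(X_bal, y_bal)
    print(classification_report(y_te, model.predict(X_te)))
    return model, top
```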
In current research on task offloading and resource scheduling in vehicular networks, vehicles are commonly assumed to maintain constant speed or relatively stationary states, and the impact of speed variations on task offloading is often overlooked. It is frequently assumed that vehicles can be accurately modeled during actual motion processes. However, in dynamic vehicular environments, both the tasks generated by the vehicles and the vehicles' surroundings are constantly changing, making it difficult to achieve real-time modeling for actual dynamic vehicular network scenarios. Taking actual dynamic vehicular scenarios into account, this paper considers the real-time non-uniform movement of vehicles and proposes a vehicular task dynamic offloading and scheduling algorithm for single-task multi-vehicle vehicular network scenarios, attempting to solve the dynamic decision-making problem in the task offloading process. The optimization objective is to minimize the average task completion time, which is formulated as a multi-constrained non-linear programming problem. Due to the mobility of vehicles, a constraint model is applied in the decision-making process to dynamically determine whether the communication range is sufficient for task offloading and transmission. Finally, the proposed vehicular task dynamic offloading and scheduling algorithm based on multi-agent deep deterministic policy gradient (MADDPG) is applied to solve the optimization problem. Simulation results show that the algorithm proposed in this paper is able to achieve lower-latency task computation offloading. Meanwhile, the average task completion time of the proposed algorithm improves by 7.6% compared with the MADDPG scheme and by 51.1% compared with deep deterministic policy gradient (DDPG).
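The communication-range constraint mentioned above can be illustrated with a small check: before offloading, verify that the upload can finish while the (non-uniformly moving) vehicle is still inside the serving node's radius. The 1-D kinematics and constant-rate channel below are simplifying assumptions, not the paper's exact models.

```python
# Sketch of a range-feasibility check for task offloading under non-uniform motion.

def can_offload(data_bits, rate_bps, x0, v0, accel, node_x, radius):
    """Return True if the vehicle stays within `radius` of the serving node for
    the whole transmission, assuming 1-D motion with constant acceleration."""
    t_tx = data_bits / rate_bps          # transmission time at an assumed fixed rate
    steps = 50                           # sample the trajectory over the interval
    for i in range(steps + 1):
        t = t_tx * i / steps
        x = x0 + v0 * t + 0.5 * accel * t * t
        if abs(x - node_x) > radius:
            return False
    return True

# Example: 1 MB task, 10 Mbit/s link, vehicle accelerating toward the node.
print(can_offload(8e6, 10e6, x0=-100.0, v0=15.0, accel=1.0,
                  node_x=0.0, radius=200.0))
```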
Currently, accessing remote computing resources through cloud data centers is the main mode of operation for applications, but this mode of operation greatly increases communication latency and reduces the overall quality of service (QoS) and quality of experience (QoE). Edge computing technology extends cloud service functionality to the edge of the mobile network, closer to the task execution end, and can effectively mitigate the communication latency problem. However, the massive and heterogeneous nature of servers in edge computing systems brings new challenges to task scheduling and resource management, and the booming development of artificial neural networks provides more powerful methods to alleviate this limitation. Therefore, in this paper, we propose a time series forecasting model incorporating Conv1D, LSTM and GRU for edge computing device resource scheduling, train and test the forecasting model using a small self-built dataset, and achieve competitive experimental results.
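A hedged sketch of one way to stack the three layer types named above for one-step-ahead load forecasting is given below; the window length, layer widths, feature set, and ordering are assumptions, since the abstract does not specify the architecture.

```python
# Assumed Conv1D -> LSTM -> GRU stacking for one-step-ahead resource-load
# forecasting; hyperparameters are placeholders, not the paper's values.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

WINDOW, N_FEATURES = 24, 3  # e.g., CPU, memory, network utilization (assumed)

model = tf.keras.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    layers.Conv1D(32, kernel_size=3, padding="causal", activation="relu"),
    layers.LSTM(64, return_sequences=True),
    layers.GRU(32),
    layers.Dense(1),  # next-step load of the target resource
])
model.compile(optimizer="adam", loss="mse")

# Toy training call on random data just to show the expected tensor shapes.
X = np.random.rand(256, WINDOW, N_FEATURES).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```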
Cloud service providers generally co-locate online services and batch jobs onto the same computer cluster, where resources can be pooled in order to maximize data center resource utilization. Due to resource competition between batch jobs and online services, co-location frequently impairs the performance of online services. This study presents a quality of service (QoS) prediction-based scheduling model (QPSM) for co-located workloads. The performance prediction of QPSM consists of two parts: the prediction of an online service's QoS anomaly based on XGBoost and the prediction of the completion time of an offline batch job based on random forest. Online service QoS anomaly prediction is used to evaluate the influence of the batch job mix on online service performance, and batch job completion time prediction is utilized to reduce the total waiting time of batch jobs. When the same number of batch jobs are scheduled in experiments using typical test sets such as CloudSuite, the scheduling time required by QPSM is reduced by about 6 h on average compared with the first-come, first-served strategy and by about 11 h compared with the random scheduling strategy. Compared with the non-co-located situation, QPSM can improve CPU resource utilization by 12.15% and memory resource utilization by 5.7% on average. Experiments show that the QPSM scheduling strategy proposed in this study can effectively guarantee the quality of online services and further improve cluster resource utilization.
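A minimal sketch of the two predictors described above is shown next, assuming tabular features that describe the candidate batch-job mix and node state; the feature construction, labels, and admission threshold are placeholders, not the paper's design.

```python
# Two-part prediction sketch in the spirit of QPSM: an XGBoost classifier flags
# whether a candidate batch-job mix would cause an online-service QoS anomaly,
# and a random forest regresses the batch job's completion time.

import numpy as np
from xgboost import XGBClassifier
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_mix = rng.random((500, 6))            # e.g., CPU/memory pressure of the job mix (assumed)
y_anomaly = (X_mix[:, 0] + X_mix[:, 1] > 1.2).astype(int)   # toy labels
X_job = rng.random((500, 4))            # e.g., job size, parallelism, input volume (assumed)
y_runtime = 60 * X_job[:, 0] + 10 * X_job[:, 2] + rng.normal(0, 1, 500)

qos_clf = XGBClassifier(n_estimators=200, max_depth=4)
qos_clf.fit(X_mix, y_anomaly)

rt_reg = RandomForestRegressor(n_estimators=200, random_state=0)
rt_reg.fit(X_job, y_runtime)

def admit_batch_job(mix_features, job_features, risk_threshold=0.2):
    """Admit the job only if the predicted QoS-anomaly risk is low; return the
    predicted completion time, which can be used to order waiting jobs."""
    risk = qos_clf.predict_proba(mix_features.reshape(1, -1))[0, 1]
    runtime = rt_reg.predict(job_features.reshape(1, -1))[0]
    return risk < risk_threshold, runtime
```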
The flow shop scheduling problem is important for the manufacturing industry, and effective flow shop scheduling can bring great benefits to the industry. However, there is little research on Distributed Hybrid Flow Shop Problems (DHFSP) using learning-assisted meta-heuristics. This work addresses a DHFSP with the objective of minimizing the maximum completion time (makespan). First, a mathematical model is developed for the concerned DHFSP. Second, four Q-learning-assisted meta-heuristics, namely genetic algorithm (GA), artificial bee colony algorithm (ABC), particle swarm optimization (PSO), and differential evolution (DE), are proposed. According to the nature of DHFSP, six local search operations are designed for finding high-quality solutions in the local space. Instead of random selection, Q-learning assists the meta-heuristics in choosing the appropriate local search operations during iterations. Finally, based on 60 cases, comprehensive numerical experiments are conducted to assess the effectiveness of the proposed algorithms. The experimental results and discussions prove that using Q-learning to select appropriate local search operations is more effective than the random strategy. To verify the competitiveness of the Q-learning-assisted meta-heuristics, they are compared with the improved iterated greedy algorithm (IIG), which also solves DHFSP. The Friedman test is executed on the results of the five algorithms. It is concluded that the performance of the four Q-learning-assisted meta-heuristics is better than that of IIG, and the Q-learning-assisted PSO shows the best competitiveness.
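The operator-selection idea can be sketched as a small Q-table whose actions are the local search operations. The two-valued state (did the last operator improve the schedule?), the improvement-based reward, and the three stand-in operators below are illustrative assumptions, not the paper's six operations.

```python
import random

# Hedged sketch of Q-learning choosing among local search operations instead of
# random selection.

OPERATORS = ["swap", "insert", "reverse"]      # stand-ins for the paper's operators
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2
q_table = {(s, a): 0.0 for s in (0, 1) for a in OPERATORS}

def choose_operator(state):
    if random.random() < EPS:
        return random.choice(OPERATORS)
    return max(OPERATORS, key=lambda a: q_table[(state, a)])

def update_q(state, action, reward, next_state):
    best_next = max(q_table[(next_state, a)] for a in OPERATORS)
    q_table[(state, action)] += ALPHA * (
        reward + GAMMA * best_next - q_table[(state, action)])

def apply_operator(schedule, op):
    """Placeholder neighborhood move on a job permutation."""
    s = list(schedule)
    i, j = sorted(random.sample(range(len(s)), 2))
    if op == "swap":
        s[i], s[j] = s[j], s[i]
    elif op == "insert":
        s.insert(j, s.pop(i))
    else:  # reverse
        s[i:j + 1] = reversed(s[i:j + 1])
    return s

def local_search_step(schedule, makespan_fn, state):
    """One Q-learning-guided local search step; keeps the better schedule."""
    op = choose_operator(state)
    new_schedule = apply_operator(schedule, op)
    delta = makespan_fn(schedule) - makespan_fn(new_schedule)
    next_state = 1 if delta > 0 else 0
    update_q(state, op, max(delta, 0.0), next_state)
    return (new_schedule if delta > 0 else schedule), next_state
```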
Improving the cooperative scheduling efficiency of equipment is the key for automated container terminals to cope with the development trend of large-scale ships. In order to improve the solution efficiency of the existing space-time network (STN) model for the cooperative scheduling problem of yard cranes (YCs) and automated guided vehicles (AGVs) and extend its application scenarios, two improved STN models are proposed. The flow balance constraints in the original model are decomposed, and the trajectory constraints of YCs and AGVs are added to obtain the model STN_A. The coupling constraint in STN_A is updated, and buffer constraints are added to STN_A so that the model STN_B is built. As the size of the problem increases, the solution speed of CPLEX becomes the bottleneck, so a heuristic method containing three groups of heuristic rules is designed to obtain a near-optimal solution quickly. Experimental results show that the computation time of STN_A is shortened by 49.47% on average and the gap is reduced by 1.69% on average compared with the original model. The gap between the solution of the heuristic rules and the solution of CPLEX is less than 3.50%, and the solution time of the heuristic rules is on average 99.85% less than the solution time of CPLEX. Compared with STN_A, the computation time for solving STN_B increases by 58.93% on average.
Bottleneck stages and reentrance often exist in real-life manufacturing processes; however, previous research rarely addresses these two processing conditions in a scheduling problem. In this study, a reentrant hybrid flow shop scheduling problem (RHFSP) with a bottleneck stage is considered, and an elite-class teaching-learning-based optimization (ETLBO) algorithm is proposed to minimize the maximum completion time. To produce high-quality solutions, teachers are divided into formal ones and substitute ones, and multiple classes are formed. The teacher phase is composed of teacher competition and teacher teaching. The learner phase is replaced with a reinforcement search of the elite class. Adaptive adjustment of teachers and classes is established based on class quality, which is determined by the number of elite solutions in the class. Numerous experimental results demonstrate the effectiveness of the new strategies, and ETLBO has a significant advantage in solving the considered RHFSP.
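For context, the canonical TLBO teacher-phase update that ETLBO builds on is sketched below on a continuous vector population; the paper itself operates on scheduling encodings and adds teacher competition, substitute teachers, and elite classes on top of this base mechanism.

```python
import numpy as np

# Canonical teaching-learning-based optimization (TLBO) teacher phase,
# shown only to illustrate the mechanism that ETLBO extends.

def tlbo_teacher_phase(population, fitness_fn, rng=np.random.default_rng()):
    fitness = np.array([fitness_fn(x) for x in population])
    teacher = population[np.argmin(fitness)]          # best learner (minimization)
    mean = population.mean(axis=0)
    new_population = population.copy()
    for i, learner in enumerate(population):
        tf = rng.integers(1, 3)                        # teaching factor in {1, 2}
        r = rng.random(learner.shape)
        candidate = learner + r * (teacher - tf * mean)
        if fitness_fn(candidate) < fitness[i]:         # greedy acceptance
            new_population[i] = candidate
    return new_population

# Example on the sphere function.
pop = np.random.default_rng(0).uniform(-5, 5, size=(20, 4))
pop = tlbo_teacher_phase(pop, lambda x: float(np.sum(x ** 2)))
```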
The growing development of the Internet of Things (IoT) is accelerating the emergence and growth of new IoT services and applications, which will result in massive amounts of data being generated, transmitted and processed in wireless communication networks. Mobile Edge Computing (MEC) is a desired paradigm to timely process the data from IoT for value maximization. In MEC, a number of computing-capable devices are deployed at the network edge near data sources to support edge computing, such that the long network transmission delay of the cloud computing paradigm can be avoided. Since an edge device might not always have sufficient resources to process the massive amount of data, computation offloading is significantly important considering the cooperation among edge devices. However, the dynamic traffic characteristics and heterogeneous computing capabilities of edge devices challenge the offloading. In addition, different scheduling schemes might provide different computation delays to the offloaded tasks. Thus, offloading in mobile nodes and scheduling in the MEC server are coupled to determine service delay. This paper seeks to guarantee low delay for computation-intensive applications by jointly optimizing the offloading and scheduling in such an MEC system. We propose a Delay-Greedy Computation Offloading (DGCO) algorithm to make offloading decisions for new tasks in distributed computing-enabled mobile devices. A Reinforcement Learning-based Parallel Scheduling (RLPS) algorithm is further designed to schedule offloaded tasks in the multi-core MEC server. With an offloading delay broadcast mechanism, DGCO and RLPS cooperate to achieve the goal of delay-guarantee-ratio maximization. Finally, the simulation results show that our proposal can bound the end-to-end delay of various tasks. Even under a slightly heavy task load, the delay-guarantee-ratio given by DGCO-RLPS can still approximate 95%, while that given by benchmarked algorithms is reduced to an intolerable value. The simulation results demonstrate the effectiveness of DGCO-RLPS for delay guarantee in MEC.
Crude oil scheduling optimization is an effective method to enhance the economic benefits of oil refining. But uncertainties, including uncertain demands of crude distillation units (CDUs), might make the production plans made by traditional deterministic optimization models infeasible. A data-driven Wasserstein distributionally robust chance-constrained (WDRCC) optimization approach is proposed in this paper to deal with demand uncertainty in crude oil scheduling. First, a new deterministic crude oil scheduling optimization model is developed as the basis of this approach. The Wasserstein distance is then used to build ambiguity sets from historical data to describe the possible realizations of the probability distributions of uncertain demands. A cross-validation method is advanced to choose suitable radii for these ambiguity sets. The deterministic model is reformulated as a WDRCC optimization model for crude oil scheduling to guarantee that the demand constraints hold with a desired high probability even in the worst situation in the ambiguity sets. The proposed WDRCC model is transformed into an equivalent conditional value-at-risk representation and further derived as a mixed-integer nonlinear programming counterpart. Industrial case studies from a real-world refinery are conducted to show the effectiveness of the proposed method. Out-of-sample tests demonstrate that the solution of the WDRCC model is more robust than those of the deterministic model and the chance-constrained model.
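For orientation, the generic shape of a Wasserstein distributionally robust chance constraint and its worst-case CVaR surrogate is written below. This is the standard textbook form, with the ambiguity set a Wasserstein ball of radius epsilon around the empirical distribution of the historical demand samples; the paper's concrete demand constraints and reformulation carry additional structure beyond this sketch.

```latex
% Generic Wasserstein DR chance constraint (standard form, not the paper's exact model):
\inf_{\mathbb{P}\,\in\,\mathcal{B}_{\varepsilon}(\hat{\mathbb{P}}_N)}
\;\mathbb{P}\!\left[\,g(x,\xi)\le 0\,\right]\;\ge\;1-\alpha,
\qquad
\mathcal{B}_{\varepsilon}(\hat{\mathbb{P}}_N)=
\left\{\mathbb{P}\;:\;W\!\left(\mathbb{P},\hat{\mathbb{P}}_N\right)\le\varepsilon\right\},
% and the conservative worst-case CVaR surrogate commonly used to obtain a tractable model:
\sup_{\mathbb{P}\,\in\,\mathcal{B}_{\varepsilon}(\hat{\mathbb{P}}_N)}
\operatorname{CVaR}_{1-\alpha}^{\mathbb{P}}\!\left[g(x,\xi)\right]\;\le\;0 .
```

Here x denotes the scheduling decisions, xi the uncertain CDU demands, P-hat_N the empirical distribution built from N historical samples, and epsilon the cross-validated radius.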
Spherical q-linear Diophantine fuzzy sets (Sq-LDFSs) have proved more effective for handling uncertainty and vagueness in multi-criteria decision-making (MADM). They not only cover data described by two parameters but are also beneficial for three-parametric data. With Pythagorean fuzzy sets, the difference is calculated only between two parameters (membership and non-membership). According to human thought, fuzzy data can involve three parameters (membership, uncertainty, and non-membership). So, to make a compromise decision, comparing Sq-LDFSs is essential. Existing measures for different fuzzy sets, however, can have several flaws that lead to counterintuitive results. For instance, they treat any increase or decrease in the membership degree the same as in the non-membership degree because the uncertainty does not change, even though each parameter has a different implication. For the comparison of Sq-LDFSs, this research develops the differential measure (DFM). The main goal of the DFM is to cover the unfair arguments that come from treating the opposing criteria of different types of FSs equally. Due to their relative positions in the attribute space and the similarity of their membership and non-membership degrees, two Sq-LDFSs form this preference connection when the uncertainty remains the same in both sets. According to the degree of superiority or inferiority, two Sq-LDFSs are shown as identical, equivalent, superior, or inferior to one another. The fundamental characteristics of the suggested DFM are provided. Based on the newly developed DFM, a unique approach to multiple-criteria group decision-making is offered. Our suggested method verifies the novel way of calculating the expert weights for Sq-LDFSs as in PFSs. Our proposed three-parameter technique is applied in two applications, evaluating solid-state drives and choosing the optimum photovoltaic cell, by taking the uncertainty parameter as zero. The applicability and validity of the method shown by the findings are contrasted with those obtained using various other existing approaches. To assess its stability and usefulness, a sensitivity analysis is done.
This study focuses on the scheduling problem of unrelated parallel batch processing machines (BPM) with release times, a scenario derived from the moulding process in a foundry. In this process, a batch is initially formed, placed in a sandbox, and then the sandbox is positioned on a BPM for moulding. The complexity of the scheduling problem increases due to the consideration of BPM capacity and sandbox volume. To minimize the makespan, a new cooperated imperialist competitive algorithm (CICA) is introduced. In CICA, the number of empires is not a parameter, and four empires are maintained throughout the search process. Two types of assimilation are achieved: the strongest and weakest empires cooperate in their assimilation, while the remaining two empires, having a close normalization total cost, combine in their assimilation. A new form of imperialist competition is proposed to prevent insufficient competition, and the unique features of the problem are effectively utilized. Computational experiments are conducted across several instances, and extensive experimental results show that the new strategies of CICA are effective, indicating promising advantages for the considered BPM scheduling problems.
Cloud computing provides a diverse and adaptable resource pool over the internet, allowing users to tap into various resources as needed. It has been seen as a robust solution to relevant challenges. A significant delay can hamper the performance of IoT-enabled cloud platforms. However, efficient task scheduling can lower the cloud infrastructure's energy consumption, thus maximizing the service provider's revenue by decreasing user job processing times. The proposed Modified Chimp-Whale Optimization Algorithm (MCWOA) combines elements of the Chimp Optimization Algorithm (COA) and the Whale Optimization Algorithm (WOA). To enhance MCWOA's identification precision, the Sobol sequence is used in the population initialization phase, ensuring an even distribution of the population across the solution space. Moreover, the traditional MCWOA's local search capabilities are augmented by incorporating the whale optimization algorithm's bubble-net hunting and random search mechanisms into MCWOA's position-updating process. This study demonstrates the effectiveness of the proposed approach using a two-story rigid frame and a simply supported beam model. Simulated outcomes reveal that the new method outperforms the original MCWOA, especially in multi-damage detection scenarios. MCWOA excels in avoiding false positives and enhancing computational speed, making it an optimal choice for structural damage detection. The efficiency of the proposed MCWOA is assessed against metrics such as energy usage, computational expense, task duration, and delay. The simulated data indicate that the new MCWOA outpaces other methods across all metrics. The study also references the Whale Optimization Algorithm (WOA), Chimp Algorithm (CA), Ant Lion Optimizer (ALO), Genetic Algorithm (GA) and Grey Wolf Optimizer (GWO).
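The two mechanisms named above, Sobol-sequence initialization and the whale algorithm's bubble-net (spiral) update, are sketched below in their canonical forms; the bounds, coefficients, and the way they are folded into MCWOA are assumptions made only for illustration.

```python
import numpy as np
from scipy.stats import qmc

# Canonical building blocks: Sobol-sequence population initialization and the
# WOA encircling / bubble-net (spiral) position updates. Placeholder settings.

def sobol_init(n_agents, dim, lower, upper, seed=0):
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    unit = sampler.random(n_agents)                 # low-discrepancy points in [0, 1)^d
    return qmc.scale(unit, lower, upper)

def woa_update(agent, best, a, rng, b=1.0):
    """One WOA position update: encircling prey half the time, otherwise the
    bubble-net spiral around the current best solution."""
    if rng.random() < 0.5:
        r1, r2 = rng.random(agent.shape), rng.random(agent.shape)
        A, C = 2 * a * r1 - a, 2 * r2
        return best - A * np.abs(C * best - agent)
    l = rng.uniform(-1, 1, agent.shape)
    d = np.abs(best - agent)
    return d * np.exp(b * l) * np.cos(2 * np.pi * l) + best

rng = np.random.default_rng(0)
pop = sobol_init(8, 4, lower=[0.0] * 4, upper=[1.0] * 4)
best = pop[0]
a = 1.0  # in canonical WOA, a decreases linearly from 2 to 0 over the iterations
pop = np.array([woa_update(x, best, a, rng) for x in pop])
```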
The recent rapid development of China's foreign trade has led to a significant increase in waterway transportation and automated container ports. Automated terminals can significantly improve the loading and unloading efficiency of container terminals. These terminals can also increase the port's transportation volume while ensuring the quality of cargo loading and unloading, which has become an inevitable trend in the future development of ports. However, the continuous growth of the port's transportation volume has increased the horizontal transportation pressure on the automated terminal, and the problems of route conflicts and road locks faced by automated guided vehicles (AGV) have become increasingly prominent. Accordingly, this work takes the Xiamen Yuanhai automated container terminal as an example and focuses on analyzing the interference problem of path conflicts in its horizontal transportation AGV scheduling. Results show that path conflict, the most prominent interference factor, will cause AGV scheduling to be unable to execute the original plan. Consequently, disruption management was used to establish a disturbance recovery model, and the Dijkstra algorithm combined with time windows is adopted to plan a conflict-free path. Based on the comparison with the rescheduling method, the research finds that the deviation of the transportation path and the deviation degree of the transportation path under the disruption management method are much lower than those of the rescheduling method. The transportation path deviation degree of the disruption management method is only 5.56%, while the deviation degree of the transportation path under the rescheduling method is 44.44%.
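A compact sketch of conflict-free routing in the spirit of "Dijkstra combined with time windows" is given below: the search runs over a time-expanded grid in which cells reserved by already-routed AGVs at given time steps are forbidden. The unit traversal time, wait actions, and grid layout are simplifying assumptions; the terminal's real network and the disruption-recovery model are richer than this.

```python
import heapq

# Time-expanded shortest-path sketch that respects other AGVs' reservations.

def plan_conflict_free(grid, start, goal, reserved, t_max=200):
    """grid: set of free cells (x, y); reserved: set of ((x, y), t) already taken.
    Returns a list of cells, one per time step, from start to goal, or None."""
    moves = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # wait in place or move 4-way
    frontier = [(0, start, [start])]                     # (time, cell, path so far)
    seen = set()
    while frontier:
        t, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if (cell, t) in seen or t >= t_max:
            continue
        seen.add((cell, t))
        for dx, dy in moves:
            nxt = (cell[0] + dx, cell[1] + dy)
            if nxt in grid and (nxt, t + 1) not in reserved:
                heapq.heappush(frontier, (t + 1, nxt, path + [nxt]))
    return None

# Tiny example: a 4x4 grid with one cell reserved at t=1 by another AGV.
cells = {(x, y) for x in range(4) for y in range(4)}
route = plan_conflict_free(cells, (0, 0), (3, 3), reserved={((1, 0), 1)})
print(route)
```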
Funding (online evolutionary decision-making and motion planning for autonomous driving): the authors acknowledge the financial support of the National Key Research and Development Program of China (2020AAA0108100), the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0100), and the Shanghai Gaofeng and Gaoyuan Project for University Academic Program Development.
Funding (trustworthy decision-making for autonomous vehicles): supported in part by the Start-Up Grant (Nanyang Assistant Professorship) of Nanyang Technological University; the Agency for Science, Technology and Research (A*STAR) under the Advanced Manufacturing and Engineering (AME) Young Individual Research Grant (A2084c0156); the MTC Individual Research Grant (M22K2c0079); the ANR-NRF Joint Grant (NRF2021-NRF-ANR003 HM Science); and the Ministry of Education (MOE) under the Tier 2 Grant (MOE-T2EP50222-0002).
Funding (soccer match analysis and decision-making): supported by the National Key Research and Development Program of China (2020AAA0103404), the Beijing Nova Program (20220484077), and the National Natural Science Foundation of China (62073323).
Funding (distributed flexible job shop scheduling with dual resource constraints): supported by the Natural Science Foundation of Anhui Province (Grant Number 2208085MG181), the Science Research Project of Higher Education Institutions in Anhui Province, Philosophy and Social Sciences (Grant Number 2023AH051063), and the Open Fund of the Key Laboratory of Anhui Higher Education Institutes (Grant Number CS2021-ZD01).
Funding (ethical decision-making framework based on incremental ILP): this work was funded by the National Natural Science Foundation of China (Nos. U22A2099, 61966009, 62006057) and the Graduate Innovation Program (No. YCSW2022286).
Funding (multi-node collaborative scheduling for target tracking in WSN): supported by the Project Program of the Science and Technology on Micro-System Laboratory, No. 6142804220101.
Funding (stroke risk assessment with Logistic-AB): supported by the National Natural Science Foundation of China (No. 72071150).
Funding (time series forecasting for edge computing resource scheduling): supported in part by the National Natural Science Foundation of China under Grants 62172192, U20A20228, and 62171203, and in part by the Science and Technology Demonstration Project of Social Development of Jiangsu Province under Grant BE2019631.
Funding (QoS prediction-based scheduling for co-located workloads): supported by the National Natural Science Foundation of China (No. 61972118) and the Key R&D Program of Zhejiang Province (No. 2023C01028).
Funding (Q-learning-assisted meta-heuristics for distributed hybrid flow shop scheduling): partially supported by the Guangdong Basic and Applied Basic Research Foundation (2023A1515011531), the National Natural Science Foundation of China under Grant 62173356, the Science and Technology Development Fund (FDCT), Macao SAR, under Grant 0019/2021/A, the Zhuhai Industry-University-Research Project with Hong Kong and Macao under Grant ZH22017002210014PWC, and the Key Technologies for Scheduling and Optimization of Complex Distributed Manufacturing Systems (22JR10KA007).
Funding (space-time network models for yard crane and AGV scheduling): National Natural Science Foundation of China (62073212).
Funding (elite-class teaching-learning-based optimization for reentrant hybrid flow shop scheduling): supported by the National Natural Science Foundation of China (Grant Number 61573264).
Funding (joint computation offloading and scheduling in MEC): supported in part by the National Natural Science Foundation of China under Grants 61901128 and 62273109, and by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (21KJB510032).
Funding: supported by the National Natural Science Foundation of China (61988101, 62073142, 22178103), the National Natural Science Fund for Distinguished Young Scholars (61925305), and the International (Regional) Cooperation and Exchange Project (61720106008).
Abstract: Crude oil scheduling optimization is an effective way to enhance the economic benefits of oil refining. However, uncertainties, including uncertain demands of crude distillation units (CDUs), may make the production plans obtained from traditional deterministic optimization models infeasible. A data-driven Wasserstein distributionally robust chance-constrained (WDRCC) optimization approach is proposed in this paper to deal with demand uncertainty in crude oil scheduling. First, a new deterministic crude oil scheduling optimization model is developed as the basis of this approach. The Wasserstein distance is then used to build ambiguity sets from historical data to describe the possible realizations of the probability distributions of uncertain demands. A cross-validation method is proposed to choose suitable radii for these ambiguity sets. The deterministic model is reformulated as a WDRCC optimization model for crude oil scheduling so that the demand constraints hold with a desired high probability even in the worst case within the ambiguity sets. The proposed WDRCC model is transformed into an equivalent conditional value-at-risk representation and further derived as a mixed-integer nonlinear programming counterpart. Industrial case studies from a real-world refinery are conducted to show the effectiveness of the proposed method. Out-of-sample tests demonstrate that the solution of the WDRCC model is more robust than those of the deterministic model and the chance-constrained model.
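As background, a generic (textbook-style) form of a Wasserstein distributionally robust chance constraint and its standard CVaR-based surrogate is sketched below; the notation is illustrative and not necessarily the paper's exact formulation.

```latex
% \hat{\mathbb{P}}_N: empirical distribution of N historical demand samples;
% \varepsilon: Wasserstein radius (chosen, e.g., by cross-validation);
% g(x,\xi): demand constraint; 1-\alpha: required satisfaction probability.
\[
\mathcal{B}_{\varepsilon}(\hat{\mathbb{P}}_N)
   = \bigl\{\, \mathbb{Q} \;:\; W(\mathbb{Q},\hat{\mathbb{P}}_N) \le \varepsilon \,\bigr\},
\qquad
\inf_{\mathbb{Q}\in\mathcal{B}_{\varepsilon}(\hat{\mathbb{P}}_N)}
   \mathbb{Q}\bigl[g(x,\xi)\le 0\bigr] \;\ge\; 1-\alpha .
\]
% A standard conservative surrogate replaces the worst-case chance constraint
% with a worst-case CVaR constraint:
\[
\sup_{\mathbb{Q}\in\mathcal{B}_{\varepsilon}(\hat{\mathbb{P}}_N)}
   \operatorname{CVaR}_{1-\alpha}^{\,\mathbb{Q}}\bigl(g(x,\xi)\bigr) \;\le\; 0 .
\]
```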
Funding: the Deanship of Scientific Research at Umm Al-Qura University (Grant Code: 22UQU4310396DSR65).
Abstract: Spherical q-linear Diophantine fuzzy sets (Sq-LDFSs) have proved more effective for handling uncertainty and vagueness in multi-criteria decision-making (MADM). They not only cover data described by two variable parameters but are also suitable for three-parameter data. With Pythagorean fuzzy sets, the difference is calculated only between two parameters (membership and non-membership). According to human thinking, fuzzy data can involve three parameters (membership, uncertainty, and non-membership). So, to make a compromise decision, comparing Sq-LDFSs is essential. However, existing measures for different fuzzy sets have several flaws that can lead to counterintuitive results. For instance, they treat any increase or decrease in the membership degree the same as one in the non-membership degree when the uncertainty does not change, even though each parameter has a different implication. For the comparison of Sq-LDFSs, this research develops a differential measure (DFM). The main goal of the DFM is to avoid the unfair comparisons that arise from treating the opposing criteria of different types of fuzzy sets equally. Owing to their relative positions in the attribute space and the similarity of their membership and non-membership degrees, two Sq-LDFSs form this preference connection when the uncertainty remains the same in both sets. According to the degree of superiority or inferiority, two Sq-LDFSs are classified as identical, equivalent, superior, or inferior to one another. The fundamental characteristics of the proposed DFM are provided. Based on the newly developed DFM, a new approach to multiple-criteria group decision-making is offered. The proposed method verifies a novel way of calculating expert weights for Sq-LDFSs, as in PFSs. The proposed three-parameter technique is applied in two applications, evaluating solid-state drives and choosing the optimal photovoltaic cell, by setting the uncertainty parameter to zero. The findings demonstrate the method's applicability and validity and are contrasted with results obtained using various other existing approaches. A sensitivity analysis is conducted to assess its stability and usefulness.
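Purely for illustration, the sketch below shows the kind of outcome classification the abstract describes (identical, equivalent, superior, or inferior) for two sets sharing the same uncertainty degree. The abstract does not give the DFM formula, so the scoring rule here is a placeholder assumption, not the authors' measure.

```python
# Toy comparison of two three-parameter fuzzy values; the "net support"
# score below is a stand-in, NOT the differential measure from the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class SqLDFS:
    membership: float
    non_membership: float
    uncertainty: float

def compare(a: SqLDFS, b: SqLDFS, tol: float = 1e-9) -> str:
    if a == b:
        return "identical"
    if abs(a.uncertainty - b.uncertainty) > tol:
        raise ValueError("this toy rule assumes equal uncertainty degrees")
    score_a = a.membership - a.non_membership   # placeholder preference score
    score_b = b.membership - b.non_membership
    if abs(score_a - score_b) <= tol:
        return "equivalent"
    return "superior" if score_a > score_b else "inferior"

print(compare(SqLDFS(0.7, 0.2, 0.1), SqLDFS(0.6, 0.3, 0.1)))  # -> "superior"
```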
Funding: the National Natural Science Foundation of China (Grant Number 61573264).
Abstract: This study focuses on the scheduling problem of unrelated parallel batch processing machines (BPMs) with release times, a scenario derived from the moulding process in a foundry. In this process, a batch is first formed and placed in a sandbox, and the sandbox is then positioned on a BPM for moulding. The complexity of the scheduling problem increases because both BPM capacity and sandbox volume are considered. To minimize the makespan, a new cooperated imperialist competitive algorithm (CICA) is introduced. In CICA, the number of empires is not a parameter, and four empires are maintained throughout the search process. Two types of assimilation are applied: the strongest and weakest empires cooperate in their assimilation, while the remaining two empires, whose normalized total costs are close, combine in their assimilation. A new form of imperialist competition is proposed to prevent insufficient competition, and the unique features of the problem are effectively exploited. Computational experiments are conducted on several instances, and extensive experimental results show that the new strategies of CICA are effective, indicating promising advantages for the considered BPM scheduling problems.
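For orientation, the classic imperialist competitive algorithm (ICA) assimilation step is sketched below; the cooperated assimilation between empires and the new competition scheme from the abstract are not reproduced, and all names and parameters are assumptions.

```python
# Classic ICA assimilation: each colony moves part of the way toward its
# imperialist (the best solution of its empire), with random step sizes.
import numpy as np

def assimilate(colonies, imperialist, beta=2.0, rng=None):
    rng = rng or np.random.default_rng()
    step = beta * rng.random(colonies.shape) * (imperialist - colonies)
    return colonies + step

rng = np.random.default_rng(1)
colonies = rng.uniform(0, 10, size=(5, 3))   # 5 candidate solutions, 3 decision values each
imperialist = rng.uniform(0, 10, size=3)     # best solution of this empire
print(assimilate(colonies, imperialist, rng=rng))
```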
Abstract: Cloud computing provides a diverse and adaptable resource pool over the internet, allowing users to tap into various resources as needed. It has been seen as a robust solution to relevant challenges. A significant delay can hamper the performance of IoT-enabled cloud platforms, but efficient task scheduling can lower the cloud infrastructure's energy consumption, thus maximizing the service provider's revenue by decreasing user job processing times. The proposed Modified Chimp-Whale Optimization Algorithm (MCWOA) combines elements of the Chimp Optimization Algorithm (COA) and the Whale Optimization Algorithm (WOA). To enhance MCWOA's identification precision, a Sobol sequence is used in the population initialization phase, ensuring an even distribution of the population across the solution space. Moreover, the local search capability is augmented by incorporating the whale optimization algorithm's bubble-net hunting and random search mechanisms into MCWOA's position-updating process. This study demonstrates the effectiveness of the proposed approach using a two-story rigid frame and a simply supported beam model. Simulated outcomes reveal that the new method outperforms the original algorithm, especially in multi-damage detection scenarios, excelling in avoiding false positives and enhancing computational speed, which makes it an attractive choice for structural damage detection. The efficiency of the proposed MCWOA is also assessed against metrics such as energy usage, computational expense, task duration, and delay. The simulated data indicate that the new MCWOA outperforms other methods across all metrics. The study also references the Whale Optimization Algorithm (WOA), Chimp Algorithm (CA), Ant Lion Optimizer (ALO), Genetic Algorithm (GA), and Grey Wolf Optimizer (GWO).
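Since the abstract singles out Sobol-sequence population initialization, the following minimal sketch shows that step using SciPy's quasi-Monte Carlo module; the bounds, dimension, and population size are placeholders, and the rest of MCWOA is not reproduced.

```python
# Sobol-sequence initialization: spread the initial population evenly over
# the search box instead of sampling it uniformly at random.
import numpy as np
from scipy.stats import qmc

def init_population(pop_size, lower, upper, seed=0):
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    sampler = qmc.Sobol(d=lower.size, scramble=True, seed=seed)
    unit = sampler.random(pop_size)          # low-discrepancy points in [0, 1)^d
    return qmc.scale(unit, lower, upper)     # map onto the search box

pop = init_population(16, lower=[0, 0, 0], upper=[10, 5, 1])
print(pop.shape)  # (16, 3)
```

Using a power-of-two population size (here 16) keeps the Sobol sample balanced, which is why that choice appears in the toy call above.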
Abstract: The recent rapid development of China's foreign trade has led to a significant increase in waterway transportation and automated container ports. Automated terminals can significantly improve the loading and unloading efficiency of container terminals and can increase a port's transportation volume while ensuring the quality of cargo handling, which makes them an inevitable trend in the future development of ports. However, the continuous growth of port transportation volume has increased the horizontal transportation pressure on automated terminals, and the problems of route conflicts and road locks faced by automated guided vehicles (AGVs) have become increasingly prominent. Accordingly, this work takes the Xiamen Yuanhai automated container terminal as an example and focuses on analyzing the interference caused by path conflicts in its horizontal-transportation AGV scheduling. The results show that path conflict, the most prominent interference factor, prevents AGV scheduling from executing the original plan. Consequently, disruption management is used to establish a disturbance recovery model, and a Dijkstra algorithm combined with time windows is adopted to plan conflict-free paths. Compared with the rescheduling method, both the transportation path deviation and its degree under the disruption management method are much lower: the transportation path deviation degree of the disruption management method is only 5.56%, while that of the rescheduling method is 44.44%.
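To illustrate the flavor of "Dijkstra combined with time windows", here is a simplified sketch that plans on a time-expanded grid and skips any (cell, time) slot already reserved by other AGVs; unit travel times, the grid layout, and the reservation table are assumptions for illustration, not the terminal's actual data or the paper's exact method.

```python
# Time-expanded shortest path with a reservation table: waiting in place is
# modeled as a zero-displacement move, and reserved slots are treated as blocked.
import heapq

def plan_path(start, goal, grid, reserved, max_time=50):
    """Return a list of (cell, time) steps from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    moves = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]   # wait or move 4-way
    dist = {(start, 0): 0}
    queue = [(0, start, 0)]                              # (cost, cell, time)
    parent = {}
    while queue:
        cost, cell, t = heapq.heappop(queue)
        if cell == goal:
            path, node = [], (cell, t)
            while node in parent:
                path.append(node)
                node = parent[node]
            return [(start, 0)] + path[::-1]
        if cost > dist.get((cell, t), float("inf")) or t >= max_time:
            continue
        for dr, dc in moves:
            nxt = (cell[0] + dr, cell[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]] == 1 or (nxt, t + 1) in reserved:
                continue                                 # obstacle or reserved slot
            new_cost = cost + 1
            if new_cost < dist.get((nxt, t + 1), float("inf")):
                dist[(nxt, t + 1)] = new_cost
                parent[(nxt, t + 1)] = (cell, t)
                heapq.heappush(queue, (new_cost, nxt, t + 1))
    return None

grid = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]                 # 1 = blocked lane
reserved = {((0, 1), 1)}                                  # another AGV occupies (0, 1) at t = 1
print(plan_path((0, 0), (2, 2), grid, reserved))
```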