Responsive orbits have exhibited advantages in emergencies for their excellent responsiveness and coverage of targets. Generally, there are several conflicting metrics to trade off in orbit design for responsive space. A special multi-objective genetic algorithm, namely the Non-dominated Sorting Genetic Algorithm II (NSGA-II), is used to design responsive orbits. The algorithm considers the conflicting metrics of orbits to achieve the optimal solution, including the orbital elements and launch programs of responsive vehicles. Low-Earth fast access orbits and low-Earth repeat coverage orbits, two subtypes of responsive orbits, can be designed using NSGA-II under given metric tradeoffs, number of vehicles, and launch mode. By selecting the optimal solution from the obtained Pareto fronts, a designer can handle the metric tradeoffs conveniently in orbit design. Owing to the flexibility of the algorithm, NSGA-II further advances responsive orbit design.
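The ranking step at the heart of NSGA-II can be illustrated with a minimal sketch of fast non-dominated sorting. This is illustrative only, not the authors' implementation, and it assumes all metrics are to be minimized:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Return a list of Pareto fronts (lists of indices), best front first."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # indices that i dominates
    domination_count = [0] * n              # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                domination_count[i] += 1
        if domination_count[i] == 0:
            fronts[0].append(i)             # non-dominated: first front
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                domination_count[j] -= 1
                if domination_count[j] == 0:
                    nxt.append(j)
        k += 1
        fronts.append(nxt)
    return fronts[:-1]
```

In the full algorithm, a crowding-distance measure then breaks ties within each front to preserve spread along the Pareto front.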
High-energy sub-nuclear interactions are a good tool to probe deep into the core of particles to recognize their structures and the governing forces. This article focuses on using one of the evolutionary computation techniques, genetic programming (GP), to model hadron-nucleus (h-A) interactions through discovering functions. GP is used to simulate the rapidity distribution of total charged, positive, and negative pions for p⁻-Ar and p⁻-Xe interactions at 200 GeV/c, and of charged particles for p-Pb collisions at 5.02 TeV. Many runs were performed to select the best runs of the GP program, finally obtaining the rapidity distribution as a function of the lab momentum, mass number (A), and the number of particles per unit solid angle (Y). In all cases studied, the seven functions discovered by the GP technique were compared with the corresponding experimental data and showed excellent agreement.
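The function-discovery idea behind GP can be sketched with a tiny symbolic-regression loop that evolves expression trees over a single variable y. The function set, mutation scheme, and truncation selection below are all simplifying assumptions for illustration, not the models fitted in the paper:

```python
import math
import random

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_tree(depth):
    """Grow a random expression tree over the variable 'y' and small constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['y', random.uniform(-2, 2)])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, y):
    if tree == 'y':
        return y
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, y), evaluate(right, y))

def mutate(tree, depth=2):
    """Replace a randomly chosen subtree with a fresh random one."""
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(depth)
    op, l, r = tree
    if random.random() < 0.5:
        return (op, mutate(l, depth), r)
    return (op, l, mutate(r, depth))

def fitness(tree, data):
    """Mean squared error against (y, target) pairs; invalid trees get +inf."""
    err = sum((evaluate(tree, y) - t) ** 2 for y, t in data)
    return err / len(data) if math.isfinite(err) else float('inf')

def gp_search(data, pop_size=60, generations=40):
    """Truncation-selection GP: keep the better half, refill by mutation."""
    pop = [random_tree(3) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda tr: fitness(tr, data))
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=lambda tr: fitness(tr, data))
```

A production GP system would also use crossover between trees and bloat control; both are omitted here for brevity.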
Most neural network architectures are based on human experience, which requires a long and tedious trial-and-error process. Neural architecture search (NAS) attempts to detect effective architectures without human intervention. Evolutionary algorithms (EAs) for NAS can find better solutions than human-designed architectures by exploring a large search space of possible architectures. Using multi-objective EAs for NAS, optimal neural architectures that meet various performance criteria can be explored and discovered efficiently. Furthermore, hardware-accelerated NAS methods can improve the efficiency of NAS. While existing reviews have mainly focused on different strategies to complete NAS, few studies have explored the use of EAs for NAS. In this paper, we summarize and explore the use of EAs for NAS, as well as large-scale multi-objective optimization strategies and hardware-accelerated NAS methods. NAS performs well in healthcare applications such as medical image analysis, disease diagnosis classification, and health monitoring. EAs for NAS can automate the search process and optimize multiple objectives simultaneously in a given healthcare task. Deep neural networks have been used successfully in healthcare, but they lack interpretability. Medical data is highly sensitive, and privacy leaks are frequently reported in the healthcare industry. To solve these problems, we propose an interpretable neuroevolution framework based on federated learning that addresses search efficiency and privacy protection in healthcare. Moreover, we point out future research directions for evolutionary NAS. Overall, for researchers who want to use EAs to optimize NNs in healthcare, we analyze the advantages and disadvantages of doing so to provide detailed guidance, and propose an interpretable privacy-preserving framework for healthcare applications.
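The basic evolutionary-NAS loop can be sketched by encoding an architecture as a variable-length list of hidden-layer widths and evolving it by mutation. Everything here is a toy assumption: in practice the fitness would be validation accuracy of a trained network, not the cheap stand-in used in the test below:

```python
import random

LAYER_CHOICES = [16, 32, 64, 128]   # candidate hidden-layer widths (assumed)

def random_arch(max_depth=4):
    """An architecture genome: a list of layer widths."""
    return [random.choice(LAYER_CHOICES) for _ in range(random.randint(1, max_depth))]

def mutate(arch):
    """Alter one layer, insert a layer, or delete a layer."""
    arch = list(arch)
    op = random.choice(['alter', 'add', 'remove'])
    if op == 'alter':
        i = random.randrange(len(arch))
        arch[i] = random.choice(LAYER_CHOICES)
    elif op == 'add' and len(arch) < 6:
        arch.insert(random.randrange(len(arch) + 1), random.choice(LAYER_CHOICES))
    elif op == 'remove' and len(arch) > 1:
        arch.pop(random.randrange(len(arch)))
    return arch

def evolve(fitness, pop_size=20, generations=30):
    """Elitist (mu+lambda)-style evolution over architectures; higher is better."""
    pop = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]           # keep the better half
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=fitness)
```

A multi-objective variant would replace the scalar fitness with non-dominated sorting over, e.g., accuracy and parameter count.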
Expensive optimization problems (EOPs) widely exist in various significant real-world applications. However, an EOP requires expensive or even unaffordable costs for evaluating candidate solutions, which makes it hard for an algorithm to find a satisfactory solution. Moreover, due to fast-growing application demands in the economy and society, such as the emergence of smart cities, the Internet of Things, and the big data era, solving EOPs more efficiently has become increasingly essential in various fields, which poses great challenges to the problem-solving ability of optimization approaches for EOPs. Among various optimization approaches, evolutionary computation (EC) is a promising global optimization tool that has been widely used for solving EOPs efficiently in the past decades. Given the fruitful advancements of EC for EOPs, it is essential to review these advancements in order to synthesize previous research experience and provide references that aid the development of relevant research fields and real-world applications. Motivated by this, this paper aims to provide a comprehensive survey showing why and how EC can solve EOPs efficiently. To this end, this paper first analyzes the total optimization cost of EC in solving EOPs. Then, based on this analysis, three promising research directions are pointed out for solving EOPs: problem approximation and substitution, algorithm design and enhancement, and parallel and distributed computation. To the best of our knowledge, this paper is the first to outline the possible directions for efficiently solving EOPs by analyzing the total expensive cost. Based on this, existing works are reviewed comprehensively via a taxonomy with four parts, comprising the above three research directions and a real-world application part. Moreover, some future research directions are also discussed. It is believed that such a survey can attract attention, encourage discussions, and stimulate new EC research ideas for solving EOPs and related real-world applications more efficiently.
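A concrete instance of the "problem approximation and substitution" direction is surrogate-assisted EC: a cheap model built from past evaluations screens candidates so that only the most promising one costs a true evaluation. The sketch below is a toy, assuming a 1-nearest-neighbour surrogate and a stand-in "expensive" objective:

```python
import random

def expensive_f(x):
    """Stand-in for an expensive objective (a real one might be a simulation)."""
    return (x - 3.0) ** 2

def surrogate_predict(archive, x):
    """Cheap 1-nearest-neighbour surrogate built from evaluated (x, f(x)) pairs."""
    return min(archive, key=lambda p: abs(p[0] - x))[1]

def surrogate_assisted_search(budget=40, pool=20):
    """Each iteration pre-screens a candidate pool with the surrogate and
    spends only ONE true evaluation, on the most promising candidate."""
    archive = [(x, expensive_f(x)) for x in (-5.0, 0.0, 5.0)]  # initial design
    while len(archive) < budget:
        best_x = min(archive, key=lambda p: p[1])[0]
        cands = [best_x + random.gauss(0, 1.0) for _ in range(pool)]
        x = min(cands, key=lambda c: surrogate_predict(archive, c))
        archive.append((x, expensive_f(x)))    # the single real evaluation
    return min(archive, key=lambda p: p[1])
```

In practice the surrogate would be a regression model such as Kriging or a radial-basis-function network, but the evaluation-saving control flow is the same.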
Large-scale multi-objective optimization problems (MOPs), which involve a large number of decision variables, have emerged from many real-world applications. While evolutionary algorithms (EAs) have been widely acknowledged as a mainstream method for MOPs, most research progress and successful applications of EAs have been restricted to MOPs with small-scale decision variables. More recently, it has been reported that traditional multi-objective EAs (MOEAs) suffer severe deterioration as the number of decision variables increases. As a result, and motivated by the emergence of real-world large-scale MOPs, investigation of MOEAs in this area has attracted much more attention in the past decade. This paper reviews the progress of evolutionary computation for large-scale multi-objective optimization from two angles. Starting from the key difficulties of large-scale MOPs, scalability analysis is discussed, focusing on the performance of existing MOEAs and the challenges induced by the increase in the number of decision variables. From the perspective of methodology, large-scale MOEAs are categorized into three classes and introduced in turn: divide-and-conquer-based, dimensionality-reduction-based, and enhanced-search-based approaches. Several future research directions are also discussed.
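The divide-and-conquer class can be caricatured in a few lines: split the variables into groups and improve one group at a time while the others stay fixed (the cooperative-coevolution idea). The sketch below is single-objective with random grouping and greedy acceptance, purely to show the control flow:

```python
import random

def grouped_search(f, dim, groups=5, iters=500, sigma=0.2):
    """Divide-and-conquer sketch: variables are split into random groups and
    only one group is perturbed per iteration while the rest stay fixed."""
    x = [random.uniform(-1, 1) for _ in range(dim)]
    idx = list(range(dim))
    random.shuffle(idx)                     # random variable grouping
    blocks = [idx[i::groups] for i in range(groups)]
    best = f(x)
    for t in range(iters):
        block = blocks[t % groups]          # round-robin over groups
        trial = list(x)
        for i in block:                     # vary only this group's variables
            trial[i] += random.gauss(0, sigma)
        y = f(trial)
        if y < best:                        # greedy acceptance
            x, best = trial, y
    return x, best
```

Real large-scale MOEAs replace the random grouping with variable-interaction analysis and the inner perturbation with a full (multi-objective) optimizer per group.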
Evolutionary computation (EC) has strengths in terms of computation for gait optimization. However, conventional evolutionary algorithms use typical gait parameters such as step length and swing height, which limit the trajectory deformation available for optimizing the foot trajectory. Furthermore, quantitative indices of fitness convergence are insufficient. In this paper, we perform gait optimization of a quadruped robot using foot placement perturbation based on EC. The proposed algorithm has an atypical solution search range, which is generated by independent manipulation of each placement that forms the foot trajectory. A convergence index is also introduced to prevent premature cessation of learning. The conventional algorithm and the proposed algorithm are applied to a quadruped robot, and walking performances are then compared by gait simulation. Although the two algorithms exhibit similar computation rates, the proposed algorithm shows better fitness and a wider search range. The evolutionary tendency of the walking trajectory is analyzed using the optimized results, and the findings provide insight into reliable leg trajectory design.
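The idea of a quantitative convergence index can be sketched as a stopping rule over the best-fitness history: learning stops only when the improvement over a trailing window falls below a tolerance. This is a generic formulation, not the paper's exact index:

```python
def converged(history, window=10, tol=1e-3):
    """Convergence-index sketch: report convergence only when the relative
    change of the best fitness over the last `window` generations is tiny.

    history : per-generation best-fitness values, oldest first.
    """
    if len(history) < window + 1:
        return False                      # too little evidence yet
    old, new = history[-window - 1], history[-1]
    return abs(old - new) <= tol * max(abs(old), 1e-12)
```

Gating termination on such an index (rather than a fixed generation count) is what prevents the premature cessation of learning mentioned above.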
Purpose – The purpose of this paper is to demonstrate the applicability of swarm and evolutionary techniques for regularized machine learning. Generally, by defining a proper penalty function, regularization laws are embedded into the structure of common least-squares solutions to increase the numerical stability, sparsity, accuracy, and robustness of regression weights. Several regularization techniques have been proposed so far, each with its own advantages and disadvantages. Several efforts have been made to find fast and accurate deterministic solvers to handle those regularization techniques. However, the proposed numerical and deterministic approaches require certain knowledge of mathematical programming and do not guarantee the global optimality of the obtained solution. In this research, the authors propose the use of constraint swarm and evolutionary techniques to cope with the demanding requirements of the regularized extreme learning machine (ELM). Design/methodology/approach – To implement the required tools for a comparative numerical study, three steps are taken. The considered algorithms contain both classical and swarm and evolutionary approaches. For the classical regularization techniques, Lasso regularization, Tikhonov regularization, cascade Lasso-Tikhonov regularization, and the elastic net are considered. For swarm and evolutionary-based regularization, an efficient constraint-handling technique known as self-adaptive penalty function constraint handling is considered, and its algorithmic structure is modified so that it can efficiently perform regularized learning. Several well-known metaheuristics are considered to check the generalization capability of the proposed scheme. To test the efficacy of the proposed constraint evolutionary-based regularization technique, a wide range of regression problems are used. Besides, the proposed framework is applied to a real-life identification problem, i.e. identifying the dominant factors affecting the hydrocarbon emissions of an automotive engine, for further assurance of the performance of the proposed scheme. Findings – Through an extensive numerical study, it is observed that the proposed scheme can be easily used for regularized machine learning. It is indicated that by defining a proper objective function and considering an appropriate penalty function, near-global-optimum values of regressors can be easily obtained. The results attest to the high potential of swarm and evolutionary techniques for fast, accurate, and robust regularized machine learning. Originality/value – The originality of the paper lies in the use of a novel constraint metaheuristic computing scheme which can be used for an effectively regularized optimally pruned extreme learning machine (OP-ELM). The self-adaptation of the proposed method relieves the user of the need for knowledge of the underlying system, and also increases the degree of automation of OP-ELM. Besides, by using different types of metaheuristics, it is demonstrated that the proposed methodology is a general, flexible scheme that can be combined with different types of swarm and evolutionary-based optimization techniques to form a regularized machine learning approach.
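The core idea of attacking a penalized least-squares problem with a metaheuristic instead of a deterministic solver can be shown in miniature. The sketch below uses a Lasso-style L1 penalty and a plain (1+1) evolution strategy as a stand-in for the paper's constraint-handling metaheuristics:

```python
import random

def penalized_loss(w, X, y, lam=0.1):
    """Mean squared error plus a Lasso-style L1 penalty on the weights."""
    mse = sum((sum(wj * xj for wj, xj in zip(w, xi)) - yi) ** 2
              for xi, yi in zip(X, y)) / len(y)
    return mse + lam * sum(abs(wj) for wj in w)

def evolve_weights(X, y, dim, iters=2000, sigma=0.5):
    """(1+1) evolution strategy on the penalized objective: no gradient,
    no problem-specific solver, just mutate-and-keep-if-better."""
    w = [0.0] * dim
    best = penalized_loss(w, X, y)
    for _ in range(iters):
        cand = [wi + random.gauss(0, sigma) for wi in w]
        c = penalized_loss(cand, X, y)
        if c < best:
            w, best = cand, c
        sigma *= 0.999                     # slow step-size decay
    return w, best
```

Because the penalty is just another term in the objective, swapping Lasso for Tikhonov or the elastic net only changes `penalized_loss`, which is precisely the flexibility argued for above.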
During the last three decades, evolutionary algorithms (EAs) have shown superiority in solving complex optimization problems, especially those with multiple objectives and non-differentiable landscapes. However, due to their stochastic search strategies, the performance of most EAs deteriorates drastically when handling a large number of decision variables. To tackle the curse of dimensionality, this work proposes an efficient EA for solving super-large-scale multi-objective optimization problems with sparse optimal solutions. The proposed algorithm estimates the sparse distribution of optimal solutions by optimizing a binary vector for each solution, and provides a fast clustering method to greatly reduce the dimensionality of the search space. More importantly, all the operations related to the decision variables consist of only a few matrix calculations, which can be directly accelerated by GPUs. While existing EAs are capable of handling fewer than 10,000 real variables, the proposed algorithm is verified to be effective in handling 1,000,000 real variables. Furthermore, since the proposed algorithm handles the large number of variables via accelerated matrix calculations, its runtime can be reduced to less than 10% of the runtime of existing EAs.
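The binary-vector-times-real-vector representation and the "everything is a matrix operation" property can be sketched with NumPy (on a GPU the same operations could run via a drop-in array library such as CuPy). This is a simplified single-objective, mutation-only step, not the multi-objective algorithm with clustering described above:

```python
import numpy as np

def sparse_ea_step(dec, mask, fitness, rate=0.01, rng=None):
    """One vectorized variation + selection step for sparse solutions.

    dec  : (n, d) matrix of real decision values, one row per individual
    mask : (n, d) binary matrix; the actual solution is dec * mask
    fitness : maps an (n, d) solution matrix to (n,) values (minimized)
    """
    if rng is None:
        rng = np.random.default_rng()
    n, d = dec.shape
    # Bit-flip mutation on the mask and Gaussian mutation on the values,
    # both expressed purely as matrix operations.
    flip = rng.random((n, d)) < rate
    child_mask = np.where(flip, 1 - mask, mask)
    child_dec = dec + 0.1 * rng.standard_normal((n, d))
    # Greedy one-to-one survivor selection between parent and child.
    keep = (fitness(child_dec * child_mask) < fitness(dec * mask))[:, None]
    return np.where(keep, child_dec, dec), np.where(keep, child_mask, mask)
```

Since no Python-level loop touches the d decision variables, the cost per generation is dominated by a handful of (n, d) array operations, which is what makes hardware acceleration pay off.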
Social propagation denotes spread phenomena directly correlated with the human world and society, which include but are not limited to the diffusion of human epidemics, human-made malicious viruses, fake news, social innovation, viral marketing, etc. Simulation and optimization are two major themes in social propagation, where network-based simulation helps to analyze and understand social contagion, and problem-oriented optimization is devoted to containing or improving the infection results. Though there have been many models and optimization techniques, the matter of concern is that the increasing complexity and scale of propagation processes continuously overturn former conclusions. Recently, evolutionary computation (EC) has shown its potential to alleviate these concerns by introducing an evolving and developing perspective. With this insight, this paper intends to develop a comprehensive view of how EC takes effect in social propagation. A taxonomy is provided for classifying the propagation problems, and the applications of EC in solving these problems are reviewed. Furthermore, some open issues of social propagation and the potential applications of EC are discussed. This paper contributes to recognizing the problems in application-oriented EC design and paves the way for the development of evolving propagation dynamics.
Large-scale multi-objective optimization problems (LSMOPs) pose challenges to existing optimizers, since a set of well-converged and diverse solutions must be found in huge search spaces. While evolutionary algorithms are good at solving small-scale multi-objective optimization problems, they are criticized for low efficiency in converging to the optimums of LSMOPs. By contrast, mathematical programming methods offer fast convergence on large-scale single-objective optimization problems, but they have difficulty finding diverse solutions for LSMOPs. Currently, how to integrate evolutionary algorithms with mathematical programming methods to solve LSMOPs remains unexplored. In this paper, a hybrid algorithm is tailored for LSMOPs by coupling differential evolution with a conjugate gradient method. On the one hand, conjugate gradients and differential evolution are used to update different decision variables of a set of solutions, where the former drives the solutions to converge quickly towards the Pareto front and the latter promotes the diversity of the solutions to cover the whole Pareto front. On the other hand, the objective decomposition strategy of evolutionary multi-objective optimization is used to differentiate the conjugate gradients of solutions, and the line search strategy of mathematical programming is used to ensure that each offspring is of higher quality than its parent. In comparison with state-of-the-art evolutionary algorithms, mathematical programming methods, and hybrid algorithms, the proposed algorithm exhibits better convergence and diversity performance on a variety of benchmark and real-world LSMOPs.
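The evolutionary/mathematical-programming coupling can be caricatured in a few lines: a DE variation step supplies global exploration, and a gradient step supplies fast local convergence. The sketch is single-objective and uses plain gradient descent, whereas the paper's algorithm is multi-objective and uses conjugate gradients with line search:

```python
import random

def hybrid_de_gradient(f, grad, dim, pop_size=20, iters=100,
                       F=0.5, cr=0.9, lr=0.05):
    """Hybrid sketch: DE/rand/1 variation followed by one gradient step,
    with greedy one-to-one survivor selection (minimization)."""
    pop = [[random.uniform(-2, 2) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        new_pop = []
        for i, x in enumerate(pop):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            # DE/rand/1 mutation with binomial crossover
            trial = [ai + F * (bi - ci) if random.random() < cr else xi
                     for xi, ai, bi, ci in zip(x, a, b, c)]
            # gradient refinement of the trial vector
            trial = [t - lr * g for t, g in zip(trial, grad(trial))]
            new_pop.append(trial if f(trial) < f(x) else x)
        pop = new_pop
    return min(pop, key=f)
```

In the real algorithm, each of the two operators is restricted to its own subset of decision variables, and a line search (rather than a fixed learning rate `lr`) guarantees that every offspring improves on its parent.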
The field penetration index (FPI) is one of the representative key parameters for examining tunnel boring machine (TBM) performance. Lack of accurate FPI prediction can be responsible for numerous disastrous incidents associated with rock mechanics and engineering. This study aims to predict TBM performance (i.e. FPI) with an efficient and improved adaptive neuro-fuzzy inference system (ANFIS) model. This was done using an evolutionary algorithm, the artificial bee colony (ABC) algorithm, combined with the ANFIS model. The role of the ABC algorithm in this system is to find the optimum membership functions (MFs) of the ANFIS model to achieve a higher degree of accuracy. The procedure and modeling were conducted on a tunnelling database comprising more than 150 data samples, where brittleness index (BI), fracture spacing, the α angle between the plane of weakness and the TBM driving direction, and field single cutter load were assigned as model inputs to approximate FPI values. According to the results obtained by performance indices, the proposed ANFIS-ABC model achieved a higher accuracy level in predicting FPI values than the ANFIS model alone. In terms of the coefficient of determination (R^2), values of 0.951 and 0.901 were obtained for the training and testing stages of the proposed ANFIS-ABC model, respectively, which confirm its power and capability in solving the TBM performance problem. The proposed model can be used in other areas of rock mechanics and underground space technologies with similar conditions.
Wind energy has been widely applied in power generation to alleviate climate problems. The wind turbine layout of a wind farm is a primary factor affecting power conversion efficiency, due to the wake effect that reduces the power outputs of wind turbines located downstream. Wind farm layout optimization (WFLO) aims to reduce the wake effect to maximize the power output of the wind farm. Nevertheless, the wake effect among wind turbines increases significantly as the number of wind turbines in the wind farm grows, which severely affects power conversion efficiency. Conventional heuristic algorithms suffer from low solution quality and local optima on large-scale WFLO under complex wind scenarios. Thus, a chaotic local search-based genetic learning particle swarm optimizer (CGPSO) is proposed to optimize large-scale WFLO problems. CGPSO is tested on four large-scale wind farms under four complex wind scenarios and compared with eight state-of-the-art algorithms. The experimental results indicate that CGPSO significantly outperforms its competitors in terms of performance, stability, and robustness. Specifically, a selection based on success and failure memories is proposed to choose a chaotic map for the chaotic local search, which improves solution quality. The parameters and search pattern of the chaotic local search are also analyzed for WFLO problems.
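The chaotic-local-search ingredient can be sketched on its own: offsets are drawn from a chaotic map instead of a uniform random generator, and the search radius shrinks over time. The sketch assumes a single logistic map; CGPSO additionally selects among several maps using the success/failure memories mentioned above:

```python
import random

def logistic_map(z):
    """Logistic map z <- 4z(1-z): chaotic and ergodic on (0, 1)."""
    return 4.0 * z * (1.0 - z)

def chaotic_local_search(f, x, lo, hi, steps=300):
    """Refine a solution with chaotic rather than uniformly random offsets;
    the non-repeating, ergodic walk helps escape shallow local optima."""
    z = [random.uniform(0.01, 0.99) for _ in x]   # one chaotic state per dim
    best, best_f = list(x), f(x)
    radius = 0.1 * (hi - lo)
    for _ in range(steps):
        z = [logistic_map(zi) for zi in z]
        cand = [min(hi, max(lo, bi + radius * (2.0 * zi - 1.0)))
                for bi, zi in zip(best, z)]
        radius *= 0.99                            # gradually focus the search
        cf = f(cand)
        if cf < best_f:
            best, best_f = cand, cf
    return best, best_f
```

In the full optimizer this routine would be applied to the swarm's global best each generation, complementing the PSO update rather than replacing it.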
Research into automatically searching for an optimal neural network (NN) by optimisation algorithms is a significant research topic in deep learning and artificial intelligence. However, this is still challenging due to two issues: both the hyperparameters and the architecture should be optimised, and the optimisation process is computationally expensive. To tackle these two issues, this paper focusses on solving the hyperparameter and architecture optimisation problem for NNs and proposes a novel light-weight scale-adaptive fitness evaluation-based particle swarm optimisation (SAFE-PSO) approach. Firstly, the SAFE-PSO algorithm considers the hyperparameters and architectures together in the optimisation problem and can therefore find their optimal combination for the globally best NN. Secondly, the computational cost can be reduced by using multi-scale accuracy evaluation methods to evaluate candidates. Thirdly, a stagnation-based switch strategy is proposed to adaptively switch between different evaluation methods to better balance search performance and computational cost. The SAFE-PSO algorithm is tested on two widely used datasets: the 10-category CIFAR10 and the 100-category CIFAR100. The experimental results show that SAFE-PSO is very effective and efficient: it can not only find a promising NN automatically but also find a better NN than compared algorithms at the same computational cost.
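The PSO machinery underlying such approaches is the textbook global-best update, sketched below for a generic minimization problem. In SAFE-PSO each particle would encode hyperparameters plus architecture choices and `f` would be a (multi-scale) accuracy evaluation; here `f` is an arbitrary test function:

```python
import random

def pso(f, dim, lo, hi, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Textbook global-best particle swarm optimisation (minimization)."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [list(x) for x in X]                  # personal best positions
    pf = [f(x) for x in X]                    # personal best values
    gf = min(pf)
    g = list(P[pf.index(gf)])                 # global best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pf[i]:                    # update personal best
                P[i], pf[i] = list(X[i]), fx
                if fx < gf:                   # update global best
                    g, gf = list(X[i]), fx
    return g, gf
```

SAFE-PSO's contribution sits on top of this loop: the fitness call `f` is replaced by cheaper low-fidelity evaluations early on, switching to full-fidelity evaluation when the swarm stagnates.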
When using healthcare data, it is crucial to weigh the advantages of data privacy against the possible drawbacks. Data from several sources must be combined for use in many data mining applications. A medical practitioner may use the results of association rule mining performed on this aggregated data to better personalize patient care and implement preventive measures. Historically, numerous heuristics (e.g., greedy search) and metaheuristic-based techniques (e.g., evolutionary algorithms) have been created for positive association rules in privacy-preserving data mining (PPDM). When it comes to connecting seemingly unrelated diseases and drugs, negative association rules may be more informative than their positive counterparts. It is well known that during negative association rule mining, a large number of uninteresting rules are formed, making this a difficult problem to tackle. In this research, we offer an adaptive method for negative association rule mining in vertically partitioned healthcare datasets that respects users' privacy. The applied approach dynamically determines the transactions to be interrupted for information hiding, as opposed to predefining them. This study introduces a novel method for addressing the problem of negative association rules in healthcare data mining, based on the Tabu-genetic optimization paradigm. Tabu search is advantageous since it removes a huge number of unnecessary rules and item sets. Experiments using benchmark healthcare datasets prove that the discussed scheme outperforms state-of-the-art solutions in terms of decreasing side effects and data distortions, as measured by the hiding failure indicator.
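The measures that distinguish negative from positive rules can be made concrete with a small support/confidence computation over transactions. This illustrates only the rule metrics, not the Tabu-genetic hiding mechanism itself:

```python
def support(transactions, items, negated=frozenset()):
    """Fraction of transactions containing all of `items` and none of `negated`."""
    hit = sum(1 for t in transactions
              if items <= t and not (negated & t))
    return hit / len(transactions)

def negative_rule_confidence(transactions, antecedent, consequent):
    """Confidence of the negative rule: antecedent => NOT consequent,
    i.e. support(antecedent AND NOT consequent) / support(antecedent)."""
    base = support(transactions, antecedent)
    if base == 0:
        return 0.0
    return support(transactions, antecedent, negated=consequent) / base
```

A hiding algorithm then perturbs ("interrupts") selected transactions so that sensitive rules drop below the support or confidence thresholds, while metrics like hiding failure count the sensitive rules that survive.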
Selective logging is well recognized as an effective practice in sustainable forest management. However, the ecological efficiency or resilience of the residual stand is often in doubt. Recovery time depends on operational variables, diversity, and forest structure. Selective logging is promising but open to improvement. This may be addressed by mathematical programming, and this study integrates economic and ecological aspects in a multi-objective function by applying two evolutionary algorithms. The function maximizes remaining stand diversity, merchantable logs, and the inverse of the distance between trees selected for harvesting and log landing points. The Brazilian rainforest database (566 trees) was used to simulate our 216-ha model. The log landing design has a maximum volume limit of 500 m^3. The non-dominated sorting genetic algorithm was applied to solve the main optimization problem. In parallel, a sub-problem (p-facility allocation) was solved for landing allocation by a genetic algorithm. Pareto frontier analysis was applied to distinguish the gradients α-economic, β-ecological, and γ-equilibrium. As expected, the solutions show high diameter changes in the residual stand (average removal of approximately 16 m^3 ha^(-1)). All solutions showed a grouping of trees selected for harvesting, although there was no formation of large clearings (percentage of canopy removal < 7%, with an average of 2.5 ind ha^(-1)). There were no differences in floristic composition when preferentially selecting species with greater frequency in the initial stand for harvesting. This implies a lower impact on the demographic rates of the remaining stand. The methodology should support reduced-impact logging projects by using spatial-diversity information to guide better practices in tropical forests.
Software is among the most significant inventions of recent years, serving a wide variety of applications. Developing a fault-free software system requires the software system design to be resilient. To make the software design more efficient, it is essential to assess the reusability of the components used. This paper proposes a software reusability prediction model named Flexible Random Fit (FRF) based on aging resilience for a Service Net (SN) software system. The reusability prediction model is developed based on a multilevel optimization technique using software characteristics such as cohesion, coupling, and complexity. Metrics are obtained from the SN software system and then subjected to min-max normalization to avoid any saturation during the learning process. The feature extraction process is made more feasible by enriching the data quality via outlier detection. The reusability of the classes is estimated using a tool called Soft Audit. Software reusability can be predicted more effectively with the proposed FRF-ANN (Flexible Random Fit - Artificial Neural Network) algorithm. Performance evaluation shows that the proposed algorithm outperforms all the other techniques, thus ensuring the optimization of software reusability based on aging resilience. The model is then tested using constraint-based testing techniques to confirm that it performs well at both optimization and prediction.
Weeds are plants that grow along with nearly all field crops, including rice, wheat, cotton, millets, and sugar cane, affecting crop yield and quality. Classification and accurate identification of all types of weeds is a challenging task for farmers in the earlier stages of crop growth because of their similarity. To address this issue, an efficient weed classification model is proposed with a deep convolutional neural network (CNN) that implements automatic feature extraction and performs complex feature learning for image classification. Throughout this work, weed images were trained using the proposed CNN model with an evolutionary computing approach to classify weeds based on two publicly available weed datasets. The Tamil Nadu Agricultural University (TNAU) dataset, used as the first dataset, consists of 40 classes of weed images, and the other dataset, from the Indian Council of Agricultural Research - Directorate of Weed Research (ICAR-DWR), contains 50 classes of weed images. An effective particle swarm optimization (PSO) technique is applied in the proposed CNN to automatically evolve and improve its classification accuracy. The proposed model was evaluated and compared with pre-trained transfer learning models such as GoogLeNet, AlexNet, Residual Neural Network (ResNet), and Visual Geometry Group Network (VGGNet) for weed classification. This work shows that the PSO-assisted proposed CNN model significantly improved the success rate, to 98.58% for the TNAU and 97.79% for the ICAR-DWR weed datasets.
Manufacturing service composition on the supply side and scheduling on the demand side are two important components of Cloud Manufacturing, which directly affect the quality of Cloud Manufacturing services. However, previous studies on the two components were carried out independently, ignoring their internal relations and mutual constraints. Considering the two components on both the supply and demand sides of Cloud Manufacturing services at the same time, a Bilateral Collaborative Optimization Model of Cloud Manufacturing (BCOM-CMfg) is constructed in this paper. In BCOM-CMfg, to solve the manufacturing service scheduling problem on the supply side, a new efficient manufacturing service scheduling strategy is proposed. Then, as the input of the service composition problem on the demand side, the scheduling strategy is used to build the BCOM-CMfg. Furthermore, the cooperation level (CPL) between services is added as an evaluation index in BCOM-CMfg, which reveals the importance of the relationship between services and improves the quality of manufacturing services more comprehensively. Finally, a Self-adaptive Multi-objective Pigeon-inspired Optimization algorithm (S-MOPIO) is proposed to solve the BCOM-CMfg. Simulation results show that the BCOM-CMfg model has advantages in reliability and cost, and that S-MOPIO can solve BCOM-CMfg effectively.
Maintaining population diversity is an important task in multimodal multi-objective optimization. Although zoning search (ZS) can improve diversity in the decision space, assigning the same computational cost to each search subspace may be wasteful when computational resources are limited, especially on imbalanced problems. To alleviate this issue, a zoning search with adaptive resource allocation (ZS-ARA) method is proposed in the current study. In the proposed ZS-ARA, the entire search space is divided into many subspaces to preserve diversity in the decision space and to reduce problem complexity. Moreover, the computational resources can be allocated automatically among all the subspaces. ZS-ARA is compared with seven algorithms on two different types of multimodal multi-objective problems (MMOPs), namely, balanced and imbalanced MMOPs. The results indicate that, similarly to ZS, ZS-ARA achieves high performance on the balanced MMOPs. Also, it can greatly assist a "regular" algorithm in improving its performance on the imbalanced MMOPs, and is capable of allocating the limited computational resources dynamically.
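The zoning-plus-adaptive-allocation idea can be shown in one dimension: the space is split into equal zones, and zones whose best value keeps improving earn a larger share of future samples. Everything below (the credit update rule, the single-variable search) is a simplifying assumption for illustration:

```python
import random

def zoning_search(f, lo, hi, zones=4, budget=400):
    """Zoning-search sketch with adaptive resource allocation: zones that
    recently improved their best value receive more of the sampling budget."""
    edges = [lo + (hi - lo) * k / zones for k in range(zones + 1)]
    best = [None] * zones           # best value found in each zone so far
    credit = [1.0] * zones          # sampling weight per zone
    for _ in range(budget):
        # roulette-wheel selection of a zone proportional to its credit
        r, acc, z = random.uniform(0, sum(credit)), 0.0, 0
        for k, c in enumerate(credit):
            acc += c
            if r <= acc:
                z = k
                break
        x = random.uniform(edges[z], edges[z + 1])   # sample inside zone z
        y = f(x)
        if best[z] is None or y < best[z]:
            best[z] = y
            credit[z] += 0.5        # reward zones that keep improving
        else:
            credit[z] = max(0.25, credit[z] * 0.95)  # decay, with a floor
    return min(b for b in best if b is not None), credit
```

Keeping a credit floor guarantees every zone is still sampled occasionally, which preserves the diversity benefit of zoning while focusing effort where progress is being made.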
In some species, e.g., chickens, birds, and fish, females may mate with more than one male. In the mating of these polygamous creatures, there is competition among males as well as among their offspring; thus, male reproductive success depends on both male competition and sperm rivalry. Inspired by this aspect of the sexual life of roosters and chickens, a novel nature-inspired optimization algorithm called the Roosters Algorithm (RA) is proposed. The algorithm was modelled and implemented based on the sexual behavior of roosters. Thirteen well-known benchmark optimization functions and 10 IEEE CEC 2018 test functions are utilized to compare the performance of RA with that of well-known algorithms: the Standard Genetic Algorithm (SGA), Differential Evolution (DE), Particle Swarm Optimization (PSO), Cuckoo Search (CS), and the Grey Wolf Optimizer (GWO). Non-parametric statistical tests, the Friedman and Wilcoxon Signed Rank tests, were also performed to demonstrate the significance of the results. In 20 of the 23 functions tested, RA either offered the best results or results similar to those of the compared algorithms. Thus, this paper not only presents a novel nature-inspired algorithm, but also offers an alternative to the well-known algorithms commonly used in the literature that is at least as effective as them.
Abstract: Responsive orbits have exhibited advantages in emergencies for their excellent responsiveness and coverage of targets. Generally, there are several conflicting metrics to trade off in orbit design for responsive space. A special multi-objective genetic algorithm, namely the Nondominated Sorting Genetic Algorithm II (NSGA-II), is used to design responsive orbits. The algorithm considers the conflicting metrics of orbits to achieve the optimal solution, including the orbital elements and launch programs of responsive vehicles. Low-Earth fast access orbits and low-Earth repeat coverage orbits, two subtypes of responsive orbits, can be designed using NSGA-II under given metric tradeoffs, number of vehicles, and launch mode. By selecting the optimal solution from the obtained Pareto fronts, a designer can handle the metric tradeoffs conveniently in orbit design. Owing to its flexibility, NSGA-II further advances responsive orbit design.
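NSGA-II's central operation is fast nondominated sorting, which ranks solutions into Pareto fronts before selection. A minimal sketch for a two-objective minimisation problem follows (illustrative only; the sample points standing in for orbit metrics are invented):

```python
def dominates(p, q):
    """p dominates q if p is no worse in every objective and strictly
    better in at least one (minimisation convention)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def nondominated_sort(points):
    """Return fronts as lists of indices; front 0 is the Pareto-optimal set."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    dom_count = [0] * n                     # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:   # only members of the current front dominate j
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]  # drop the trailing empty front

# Hypothetical two-metric trade-off points (both objectives minimised).
pts = [(1, 5), (2, 3), (3, 1), (2, 4), (4, 4)]
fronts = nondominated_sort(pts)
```

A designer would then pick a preferred compromise from `fronts[0]`, which is the "selecting the optimal solution from the obtained Pareto fronts" step the abstract mentions.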
Abstract: High-energy sub-nuclear interactions are a good tool for probing deep into the core of particles to recognize their structures and the governing forces. This article focuses on using one of the evolutionary computation techniques, genetic programming (GP), to model hadron-nucleus (h-A) interactions by discovering functions. GP is used to simulate the rapidity distribution of total charged, positive, and negative pions for p⁻-Ar and p⁻-Xe interactions at 200 GeV/c, and of charged particles for p-Pb collisions at 5.02 TeV. We performed many runs, selected the best runs of the GP program, and finally obtained the rapidity distribution as a function of the lab momentum, the mass number (A), and the number of particles per unit solid angle (Y). In all cases studied, the seven functions discovered by the GP technique matched the corresponding experimental data closely.
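Function discovery of this kind searches a space of expression trees built from a primitive set and keeps the candidate with the lowest error against data. The sketch below shows only the generate-evaluate-select core over random trees (it omits GP's crossover and mutation), and the Gaussian-like "rapidity distribution" target data are invented for illustration:

```python
import math
import random

# Primitive set for a tiny symbolic search: binary ops and terminals.
OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def random_tree(rng, depth=3):
    """Grow a random expression tree over the variable 'y' and constants."""
    if depth == 0 or rng.random() < 0.3:
        return ("y",) if rng.random() < 0.5 else ("const", rng.uniform(-2, 2))
    op = rng.choice(list(OPS))
    return (op, random_tree(rng, depth - 1), random_tree(rng, depth - 1))

def evaluate(tree, y):
    if tree[0] == "y":
        return y
    if tree[0] == "const":
        return tree[1]
    return OPS[tree[0]](evaluate(tree[1], y), evaluate(tree[2], y))

def fitness(tree, data):
    """Mean squared error of the tree against (y, target) samples."""
    return sum((evaluate(tree, y) - t) ** 2 for y, t in data) / len(data)

# Hypothetical target: a Gaussian-like distribution sampled on a rapidity grid.
rng = random.Random(7)
data = [(y, math.exp(-0.5 * y * y)) for y in [i * 0.5 - 3 for i in range(13)]]

# "Best of many runs": generate candidates and keep the fittest expression.
best = min((random_tree(rng) for _ in range(2000)), key=lambda t: fitness(t, data))
```

A full GP would evolve this population with subtree crossover and mutation instead of resampling from scratch; the representation and fitness function would be unchanged.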
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) under Grant No. 61976242; in part by the Natural Science Fund of Hebei Province for Distinguished Young Scholars under Grant No. F2021202010; in part by the Fundamental Scientific Research Funds for the Interdisciplinary Team of Hebei University of Technology under Grant No. JBKYTD2002; by the Science and Technology Project of the Hebei Education Department under Grant No. JZX2023007; and by the 2022 Interdisciplinary Postgraduate Training Program of Hebei University of Technology under Grant No. HEBUT-YXKJC-2022122.
Abstract: Most neural network architectures are based on human experience, which requires a long and tedious trial-and-error process. Neural architecture search (NAS) attempts to detect effective architectures without human intervention. Evolutionary algorithms (EAs) for NAS can find better solutions than human-designed architectures by exploring a large search space of possible architectures. Using multi-objective EAs for NAS, optimal neural architectures that meet various performance criteria can be explored and discovered efficiently; furthermore, hardware-accelerated NAS methods can improve the efficiency of NAS. While existing reviews have mainly focused on different strategies for completing NAS, few studies have explored the use of EAs for NAS. In this paper, we summarize and explore the use of EAs for NAS, as well as large-scale multi-objective optimization strategies and hardware-accelerated NAS methods. NAS performs well in healthcare applications such as medical image analysis, disease diagnosis classification, and health monitoring, and EAs for NAS can automate the search process and optimize multiple objectives simultaneously for a given healthcare task. Deep neural networks have been used successfully in healthcare, but they lack interpretability; moreover, medical data is highly sensitive, and privacy leaks are frequently reported in the healthcare industry. To address these problems, we propose an interpretable neuroevolution framework based on federated learning that tackles both search efficiency and privacy protection, and we point out future research directions for evolutionary NAS. Overall, for researchers who want to use EAs to optimize NNs in healthcare, we analyze the advantages and disadvantages of doing so to provide detailed guidance, and propose an interpretable privacy-preserving framework for healthcare applications.
Funding: Supported by the National Key Research and Development Program of China (No. 2019YFB2102102); the Outstanding Youth Science Foundation (No. 61822602); the National Natural Science Foundation of China (Nos. 62176094, 61772207 and 61873097); the Key-Area Research and Development Program of Guangdong Province (No. 2020B010166002); the Guangdong Natural Science Foundation Research Team (No. 2018B030312003); and the National Research Foundation of Korea (No. NRF-2021H1D3A2A01082705).
Abstract: Expensive optimization problems (EOPs) widely exist in various significant real-world applications. However, an EOP requires expensive or even unaffordable costs for evaluating candidate solutions, which makes it costly for an algorithm to find a satisfactory solution. Moreover, due to fast-growing application demands in the economy and society, such as the emergence of smart cities, the Internet of Things, and the big data era, solving EOPs more efficiently has become increasingly essential in various fields, which poses great challenges to the problem-solving ability of optimization approaches. Among various optimization approaches, evolutionary computation (EC) is a promising global optimization tool that has been widely used for solving EOPs efficiently over the past decades. Given the fruitful advancements of EC for EOPs, it is essential to review these advancements in order to synthesize previous research experience and provide references that aid the development of relevant research fields and real-world applications. Motivated by this, this paper provides a comprehensive survey showing why and how EC can solve EOPs efficiently. The paper first analyzes the total optimization cost of EC in solving EOPs. Then, based on this analysis, three promising research directions are pointed out: problem approximation and substitution, algorithm design and enhancement, and parallel and distributed computation. To the best of our knowledge, this paper is the first to outline possible directions for efficiently solving EOPs by analyzing their total expensive cost. On this basis, existing works are reviewed comprehensively via a taxonomy with four parts, comprising the above three research directions and a real-world application part. Some future research directions are also discussed. It is believed that such a survey can attract attention, encourage discussions, and stimulate new EC research ideas for solving EOPs and related real-world applications more efficiently.
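The first direction, problem approximation and substitution, usually means evaluating most candidates on a cheap surrogate and reserving the expensive function for promising ones. A minimal 1-D sketch with a nearest-neighbour surrogate follows; the objective, gating rule, and thresholds are assumptions for illustration, not a method from the survey:

```python
import random

def expensive_f(x):
    """Stand-in for a costly simulation (minimisation)."""
    return (x - 2.0) ** 2

class NearestNeighbourSurrogate:
    """Predicts the fitness of x from the closest archived true evaluation."""
    def __init__(self):
        self.archive = []            # list of (x, f(x)) pairs
    def add(self, x, fx):
        self.archive.append((x, fx))
    def predict(self, x):
        return min(self.archive, key=lambda p: abs(p[0] - x))[1]

rng = random.Random(3)
surrogate = NearestNeighbourSurrogate()
true_evals = 0

# Seed the archive with a few true (expensive) evaluations.
for x in (-5.0, 0.0, 5.0):
    surrogate.add(x, expensive_f(x))
    true_evals += 1

best_x, best_fx = min(surrogate.archive, key=lambda p: p[1])
for _ in range(200):                      # cheap candidate generation
    cand = best_x + rng.gauss(0, 1)
    # Spend a true evaluation only if the surrogate deems the candidate promising.
    if surrogate.predict(cand) <= best_fx:
        fx = expensive_f(cand)
        true_evals += 1
        surrogate.add(cand, fx)
        if fx < best_fx:
            best_x, best_fx = cand, fx
```

The candidates filtered out by the surrogate cost nothing, which is exactly where the "total optimization cost" saving comes from.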
Funding: This work was supported by the National Natural Science Foundation of China (Nos. 61672478 and 61806090); the National Key Research and Development Program of China (No. 2017YFB1003102); the Guangdong Provincial Key Laboratory (No. 2020B121201001); the Shenzhen Peacock Plan (No. KQTD2016112514355531); the Guangdong-Hong Kong-Macao Greater Bay Area Center for Brain Science and Brain-inspired Intelligence Fund (No. 2019028); the Fellowship of the China Postdoctoral Science Foundation (No. 2020M671900); and the National Leading Youth Talent Support Program of China.
Abstract: Large-scale multi-objective optimization problems (MOPs), which involve a large number of decision variables, have emerged from many real-world applications. While evolutionary algorithms (EAs) have been widely acknowledged as a mainstream method for MOPs, most research progress and successful applications of EAs have been restricted to MOPs with small-scale decision variables. More recently, it has been reported that traditional multi-objective EAs (MOEAs) suffer severe deterioration as the number of decision variables increases. As a result, and motivated by the emergence of real-world large-scale MOPs, investigation of MOEAs in this respect has attracted much more attention in the past decade. This paper reviews the progress of evolutionary computation for large-scale multi-objective optimization from two angles. Regarding the key difficulties of large-scale MOPs, a scalability analysis is presented, focusing on the performance of existing MOEAs and the challenges induced by the increasing number of decision variables. From the perspective of methodology, large-scale MOEAs are categorized into three classes and introduced in turn: divide-and-conquer-based, dimensionality-reduction-based, and enhanced-search-based approaches. Several future research directions are also discussed.
Funding: This work was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korean Government (MSIT) (No. NRF-2019R1A2C2084677) and by the 2021 Research Fund (1.210052.01) of UNIST (Ulsan National Institute of Science and Technology).
Abstract: Evolutionary computation (EC) has strengths in terms of computation for gait optimization. However, conventional evolutionary algorithms use typical gait parameters such as step length and swing height, which limit the trajectory deformation available when optimizing the foot trajectory; furthermore, quantitative indices of fitness convergence are insufficient. In this paper, we perform gait optimization of a quadruped robot using foot placement perturbation based on EC. The proposed algorithm has an atypical solution search range, generated by independent manipulation of each placement that forms the foot trajectory. A convergence index is also introduced to prevent premature cessation of learning. The conventional algorithm and the proposed algorithm are applied to a quadruped robot, and walking performances are then compared by gait simulation. Although the two algorithms exhibit similar computation rates, the proposed algorithm shows better fitness and a wider search range. The evolutionary tendency of the walking trajectory is analyzed using the optimized results, and the findings provide insight into reliable leg trajectory design.
Abstract: Purpose – The purpose of this paper is to demonstrate the applicability of swarm and evolutionary techniques for regularized machine learning. Generally, by defining a proper penalty function, regularization laws are embedded into the structure of common least squares solutions to increase the numerical stability, sparsity, accuracy, and robustness of regression weights. Several regularization techniques have been proposed so far, each with its own advantages and disadvantages, and several efforts have been made to find fast and accurate deterministic solvers for them. However, the proposed numerical and deterministic approaches require certain knowledge of mathematical programming and do not guarantee the global optimality of the obtained solution. In this research, the authors propose the use of constrained swarm and evolutionary techniques to cope with the demanding requirements of the regularized extreme learning machine (ELM). Design/methodology/approach – To implement the required tools for a comparative numerical study, three steps are taken. The considered algorithms contain both classical and swarm and evolutionary approaches. For the classical regularization techniques, Lasso regularization, Tikhonov regularization, cascade Lasso-Tikhonov regularization, and the elastic net are considered. For swarm and evolutionary-based regularization, an efficient constraint handling technique known as the self-adaptive penalty function is considered, and its algorithmic structure is modified so that it can efficiently perform regularized learning. Several well-known metaheuristics are considered to check the generalization capability of the proposed scheme. To test the efficacy of the proposed constrained evolutionary-based regularization technique, a wide range of regression problems are used. In addition, the proposed framework is applied to a real-life identification problem, i.e. identifying the dominant factors affecting the hydrocarbon emissions of an automotive engine, for further assurance of the performance of the proposed scheme. Findings – Through an extensive numerical study, it is observed that the proposed scheme can be easily used for regularized machine learning. It is shown that by defining a proper objective function and considering an appropriate penalty function, near-globally-optimal values of the regressors can be obtained. The results attest to the high potential of swarm and evolutionary techniques for fast, accurate, and robust regularized machine learning. Originality/value – The originality of the paper lies in the use of a novel constrained metaheuristic computing scheme for an effectively regularized optimally pruned extreme learning machine (OP-ELM). The self-adaptation of the proposed method relieves the user of detailed knowledge of the underlying system and increases the degree of automation of OP-ELM. Moreover, by using different types of metaheuristics, it is demonstrated that the proposed methodology is a general, flexible scheme that can be combined with different types of swarm and evolutionary-based optimization techniques to form a regularized machine learning approach.
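The idea of treating regularized least squares as a plain search problem rather than a closed-form solve can be illustrated with a toy (1+1) evolution strategy minimising a Tikhonov-style objective. This is a bare illustration under assumed data and parameters, not the paper's self-adaptive penalty scheme:

```python
import random

# Toy data with nearly collinear columns, so the L2 penalty matters.
X = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.0)]
y = [3.0, 6.0, 9.0, 12.0]

def objective(w, lam=0.1):
    """Sum of squared errors plus a Tikhonov (L2) penalty on the weights."""
    sse = sum((sum(wi * xi for wi, xi in zip(w, row)) - t) ** 2
              for row, t in zip(X, y))
    return sse + lam * sum(wi * wi for wi in w)

rng = random.Random(0)
w = [0.0, 0.0]
best = objective(w)
for _ in range(3000):
    # (1+1)-ES: mutate the weights, keep the child only if it is better.
    cand = [wi + rng.gauss(0.0, 0.5) for wi in w]
    val = objective(cand)
    if val < best:
        w, best = cand, val
```

Swapping `objective` for a Lasso or elastic-net penalty changes nothing in the search loop, which is the flexibility argument made in the abstract.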
Funding: This work was supported in part by the National Key Research and Development Program of China (2018AAA0100100); the National Natural Science Foundation of China (61822301, 61876123, 61906001); the Collaborative Innovation Program of Universities in Anhui Province (GXXT-2020-051); the Hong Kong Scholars Program (XJ2019035); and the Anhui Provincial Natural Science Foundation (1908085QF271).
Abstract: During the last three decades, evolutionary algorithms (EAs) have shown superiority in solving complex optimization problems, especially those with multiple objectives and non-differentiable landscapes. However, due to their stochastic search strategies, the performance of most EAs deteriorates drastically when handling a large number of decision variables. To tackle the curse of dimensionality, this work proposes an efficient EA for solving super-large-scale multi-objective optimization problems with sparse optimal solutions. The proposed algorithm estimates the sparse distribution of optimal solutions by optimizing a binary vector for each solution, and provides a fast clustering method that greatly reduces the dimensionality of the search space. More importantly, all operations related to the decision variables consist of only a few matrix calculations, which can be directly accelerated by GPUs. While existing EAs are capable of handling fewer than 10,000 real variables, the proposed algorithm is verified to be effective in handling 1,000,000 real variables. Furthermore, since the proposed algorithm handles the large number of variables via accelerated matrix calculations, its runtime can be reduced to less than 10% of the runtime of existing EAs.
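The sparse encoding described here, a binary vector multiplied elementwise by a real vector so that "off" variables are exactly zero, can be written down directly. A stdlib-only toy sketch follows (a simple (1+1) hill climber stands in for the paper's GPU-accelerated matrix formulation; the objective and mutation rates are assumptions):

```python
import random

def decode(mask, values):
    """Decision vector = elementwise mask * values, so zeroed mask bits
    yield exactly-zero variables (the sparse encoding)."""
    return [m * v for m, v in zip(mask, values)]

def f(x):
    """Toy objective with a sparse optimum: only the first two variables matter."""
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + sum(v * v for v in x[2:])

rng = random.Random(42)
n = 50
mask = [1] * n                                   # binary vector: which variables are on
values = [rng.uniform(-1, 1) for _ in range(n)]  # real-valued vector
best = f(decode(mask, values))
for _ in range(5000):
    # Mutate the binary mask (rare bit flips) and one real variable.
    m2 = [b ^ 1 if rng.random() < 1.0 / n else b for b in mask]
    v2 = list(values)
    i = rng.randrange(n)
    v2[i] += rng.gauss(0.0, 0.3)
    fv = f(decode(m2, v2))
    if fv < best:                                # greedy (1+1) acceptance
        mask, values, best = m2, v2, fv
```

Because `decode` is a pure elementwise product, a whole population can be decoded as one matrix multiplication, which is what makes the GPU acceleration in the abstract natural.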
Funding: Supported by the National Key Research and Development Project, Ministry of Science and Technology, China (No. 2018AAA0101300); the National Natural Science Foundation of China (Nos. 61976093 and 61873097); the Guangdong-Hong Kong Joint Innovative Platform of Big Data and Computational Intelligence (No. 2018B050502006); and the Guangdong Natural Science Foundation Research Team (No. 2018B030312003).
Abstract: Social propagation denotes spread phenomena directly correlated with the human world and society, including but not limited to the diffusion of human epidemics, man-made malicious viruses, fake news, social innovation, viral marketing, etc. Simulation and optimization are two major themes in social propagation, where network-based simulation helps to analyze and understand social contagion, and problem-oriented optimization is devoted to containing or improving the infection results. Though there have been many models and optimization techniques, the matter of concern is that the increasing complexity and scale of propagation processes continuously overturn former conclusions. Recently, evolutionary computation (EC) has shown its potential to alleviate these concerns by introducing an evolving and developing perspective. With this insight, this paper develops a comprehensive view of how EC takes effect in social propagation. A taxonomy is provided for classifying the propagation problems, and the applications of EC in solving these problems are reviewed. Furthermore, some open issues of social propagation and the potential applications of EC are discussed. This paper contributes to recognizing the problems in application-oriented EC design and paves the way for the development of evolving propagation dynamics.
Funding: Supported in part by the National Key Research and Development Program of China (2018AAA0100100); the National Natural Science Foundation of China (61906001, 62136008, U21A20512); the Key Program of the Natural Science Project of the Educational Commission of Anhui Province (KJ2020A0036); and an Alexander von Humboldt Professorship for Artificial Intelligence funded by the Federal Ministry of Education and Research, Germany.
Abstract: Large-scale multi-objective optimization problems (LSMOPs) pose challenges to existing optimizers, since a set of well-converged and diverse solutions must be found in huge search spaces. While evolutionary algorithms are good at solving small-scale multi-objective optimization problems, they are criticized for low efficiency in converging to the optima of LSMOPs. By contrast, mathematical programming methods offer fast convergence on large-scale single-objective optimization problems, but they have difficulty finding diverse solutions for LSMOPs. How to integrate evolutionary algorithms with mathematical programming methods to solve LSMOPs remains largely unexplored. In this paper, a hybrid algorithm is tailored for LSMOPs by coupling differential evolution with a conjugate gradient method. On the one hand, conjugate gradients and differential evolution are used to update different decision variables of a set of solutions, where the former drives the solutions to converge quickly towards the Pareto front and the latter promotes the diversity of the solutions so as to cover the whole Pareto front. On the other hand, the objective decomposition strategy of evolutionary multi-objective optimization is used to differentiate the conjugate gradients of solutions, and the line search strategy of mathematical programming is used to ensure that each offspring is of higher quality than its parent. In comparison with state-of-the-art evolutionary algorithms, mathematical programming methods, and hybrid algorithms, the proposed algorithm exhibits better convergence and diversity performance on a variety of benchmark and real-world LSMOPs.
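The division of labour, a gradient step for convergence plus a DE-style mutation for diversity, can be sketched on a smooth single-objective toy problem. Plain gradient descent stands in for the paper's conjugate-gradient and line-search machinery, and the sphere objective is an assumption for illustration:

```python
import random

def f(x):
    """Smooth toy objective (sphere function, minimised at the origin)."""
    return sum(v * v for v in x)

def grad(x):
    return [2.0 * v for v in x]

def hybrid_generation(pop, rng, lr=0.1, F=0.5):
    """One generation: a gradient step per solution (convergence role) and a
    DE/rand/1 mutant (diversity role); keep whichever is better."""
    out = []
    for i, x in enumerate(pop):
        xg = [v - lr * g for v, g in zip(x, grad(x))]             # gradient step
        a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
        xm = [av + F * (bv - cv) for av, bv, cv in zip(a, b, c)]  # DE mutant
        out.append(min(xg, xm, key=f))                            # greedy selection
    return out

rng = random.Random(1)
pop = [[rng.uniform(-5.0, 5.0) for _ in range(4)] for _ in range(8)]
for _ in range(50):
    pop = hybrid_generation(pop, rng)
best = min(f(x) for x in pop)
```

In the multi-objective setting the paper targets, the gradient would instead follow a decomposed (scalarized) objective per solution, so different solutions descend towards different parts of the Pareto front.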
Funding: Supported by the Faculty Development Competitive Research Grant Program of Nazarbayev University (Grant No. 021220FD5151).
Abstract: The field penetration index (FPI) is one of the representative key parameters for examining tunnel boring machine (TBM) performance, and a lack of accurate FPI prediction can be responsible for numerous disastrous incidents in rock mechanics and engineering. This study aims to predict TBM performance (i.e., FPI) with an efficient, improved adaptive neuro-fuzzy inference system (ANFIS) model. This was done using an evolutionary algorithm, the artificial bee colony (ABC) algorithm, combined with the ANFIS model. The role of the ABC algorithm in this system is to find the optimal membership functions (MFs) of the ANFIS model so as to achieve a higher degree of accuracy. The modeling was conducted on a tunnelling database comprising more than 150 data samples, where the brittleness index (BI), fracture spacing, the α angle between the plane of weakness and the TBM driving direction, and the field single cutter load were assigned as model inputs to approximate FPI values. According to the performance indices, the proposed ANFIS-ABC model achieved a higher accuracy level in predicting FPI values than the ANFIS model. In terms of the coefficient of determination (R²), values of 0.951 and 0.901 were obtained for the training and testing stages of the proposed ANFIS-ABC model, respectively, which confirm its power and capability in solving the TBM performance problem. The proposed model can be used in other areas of rock mechanics and underground space technology with similar conditions.
Funding: Partially supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI (JP22H03643); the Japan Science and Technology Agency (JST) Support for Pioneering Research Initiated by the Next Generation (SPRING) program (JPMJSP2145); and JST through the Establishment of University Fellowships towards the Creation of Science Technology Innovation (JPMJFS2115).
Abstract: Wind energy has been widely applied in power generation to alleviate climate problems. The wind turbine layout of a wind farm is a primary factor affecting power conversion efficiency, due to the wake effect, which reduces the power outputs of wind turbines located downstream. Wind farm layout optimization (WFLO) aims to reduce the wake effect so as to maximize the power output of the wind farm. Nevertheless, the wake effect among wind turbines increases significantly as the number of wind turbines in the wind farm grows, severely affecting power conversion efficiency. Conventional heuristic algorithms suffer from low solution quality and local optima on large-scale WFLO under complex wind scenarios. Thus, a chaotic local search-based genetic learning particle swarm optimizer (CGPSO) is proposed to optimize large-scale WFLO problems. CGPSO is tested on four large-scale wind farms under four complex wind scenarios and compared with eight state-of-the-art algorithms. The experimental results indicate that CGPSO significantly outperforms its competitors in terms of performance, stability, and robustness. Specifically, a selection mechanism based on success and failure memories is proposed to choose a chaotic map for the chaotic local search, which improves solution quality. The parameters and search pattern of the chaotic local search are also analyzed for WFLO problems.
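A chaotic local search replaces uniform random perturbations with a deterministic chaotic sequence such as the logistic map. A minimal 1-D sketch follows; the fixed map choice and search radius are simplifying assumptions, whereas the paper selects among several maps via its success/failure memories:

```python
def logistic_map(z):
    """Classic logistic map at r = 4: chaotic on (0, 1)."""
    return 4.0 * z * (1.0 - z)

def chaotic_local_search(f, x, radius=1.0, iters=100, z=0.7):
    """Refine solution x by perturbing it with a chaotic sequence,
    keeping only improving moves (minimisation)."""
    best_x, best_f = x, f(x)
    for _ in range(iters):
        z = logistic_map(z)
        cand = best_x + radius * (2.0 * z - 1.0)   # map (0, 1) onto (-radius, radius)
        fc = f(cand)
        if fc < best_f:
            best_x, best_f = cand, fc
    return best_x, best_f

# Example: refine x = 3 on a 1-D quadratic whose minimum is at x = 1.
x_star, f_star = chaotic_local_search(lambda x: (x - 1.0) ** 2, 3.0)
```

In CGPSO this routine would refine promising particles found by the PSO, combining the swarm's global exploration with the chaotic sequence's non-repeating local exploitation.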
Funding: Supported in part by the National Key Research and Development Program of China under Grant 2019YFB2102102; the National Natural Science Foundation of China under Grants 62176094 and 61873097; the Key-Area Research and Development Program of Guangdong Province under Grant 2020B010166002; the Guangdong Natural Science Foundation Research Team under Grant 2018B030312003; and the Guangdong-Hong Kong Joint Innovation Platform under Grant 2018B050502006.
Abstract: Automatically searching for an optimal neural network (NN) by optimisation algorithms is a significant research topic in deep learning and artificial intelligence. However, this is still challenging due to two issues: both the hyperparameters and the architecture should be optimised, and the optimisation process is computationally expensive. To tackle these two issues, this paper focusses on solving the hyperparameter and architecture optimisation problem for NNs and proposes a novel light-weight scale-adaptive fitness evaluation-based particle swarm optimisation (SAFE-PSO) approach. Firstly, the SAFE-PSO algorithm considers the hyperparameters and architectures together in the optimisation problem and can therefore find their optimal combination for the globally best NN. Secondly, the computational cost can be reduced by using multi-scale accuracy evaluation methods to evaluate candidates. Thirdly, a stagnation-based switch strategy is proposed to adaptively switch between different evaluation methods to better balance search performance and computational cost. The SAFE-PSO algorithm is tested on two widely used datasets: the 10-category CIFAR10 and the 100-category CIFAR100. The experimental results show that SAFE-PSO is very effective and efficient: it can not only find a promising NN automatically but also find a better NN than the compared algorithms at the same computational cost.
Abstract: While using healthcare data, it is crucial to weigh the advantages of data privacy against its possible drawbacks. Data from several sources must be combined for use in many data mining applications, and a medical practitioner may use the results of association rule mining performed on this aggregated data to better personalize patient care and implement preventive measures. Historically, numerous heuristic (e.g., greedy search) and metaheuristic-based techniques (e.g., evolutionary algorithms) have been created for positive association rules in privacy-preserving data mining (PPDM). When it comes to connecting seemingly unrelated diseases and drugs, negative association rules may be more informative than their positive counterparts. It is well known that a large number of uninteresting rules are formed during negative association rule mining, making it a difficult problem to tackle. In this research, we offer an adaptive method for privacy-preserving negative association rule mining in vertically partitioned healthcare datasets. The applied approach dynamically determines the transactions to be interrupted for information hiding, as opposed to predefining them. This study introduces a novel method for addressing the problem of negative association rules in healthcare data mining based on the Tabu-genetic optimization paradigm. Tabu search is advantageous because it removes a huge number of unnecessary rules and itemsets. Experiments using benchmark healthcare datasets show that the discussed scheme outperforms state-of-the-art solutions in terms of decreasing side effects and data distortion, as measured by the hiding failure indicator.
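A negative association rule relates the presence of one itemset to the absence of another, and its support and confidence follow directly from positive counts. A minimal sketch on invented toy transactions (illustrative only; it shows the rule measures, not the paper's hiding algorithm):

```python
def support(transactions, items):
    """Fraction of transactions containing all of the given items."""
    items = set(items)
    return sum(1 for t in transactions if items <= t) / len(transactions)

def negative_rule(transactions, antecedent, absent):
    """Support and confidence of the rule: antecedent => NOT absent.
    Uses supp(A and not-B) = supp(A) - supp(A union B)."""
    supp_a = support(transactions, antecedent)
    supp_ab = support(transactions, set(antecedent) | set(absent))
    supp = supp_a - supp_ab
    conf = supp / supp_a if supp_a else 0.0
    return supp, conf

# Hypothetical per-patient records (drug / condition items).
txns = [set(t) for t in (
    ["aspirin", "hypertension"],
    ["aspirin", "diabetes"],
    ["aspirin"],
    ["statin", "hypertension"],
)]
supp, conf = negative_rule(txns, {"aspirin"}, {"hypertension"})
```

Hiding a sensitive rule then amounts to "interrupting" chosen transactions so that its support or confidence drops below the mining thresholds, which is the step the Tabu-genetic search optimizes.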
Funding: Supported by the Coordenacao de Aperfeicoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code 001; the Postgraduate Programme in Forest Engineering of the Federal University of Lavras (PPGEF/UFLA); and the Group of Optimization and Planning (GOPLAN/UFLA/LEMAF-Forest Management Research Lab).
Abstract: Selective logging is well recognized as an effective practice in sustainable forest management. However, the ecological efficiency and resilience of the residual stand are often in doubt, as recovery time depends on operational variables, diversity, and forest structure. Selective logging performs well but leaves room for improvement, which may be addressed by mathematical programming; this study integrates economic and ecological aspects in a multi-objective function solved by two evolutionary algorithms. The function maximizes remaining stand diversity, merchantable logs, and the inverse of the distance between trees selected for harvesting and log landing points. A Brazilian rainforest database (566 trees) was used to simulate our 216-ha model, with a maximum volume limit of 500 m³ per log landing. The nondominated sorting genetic algorithm was applied to solve the main optimization problem; in parallel, a sub-problem (p-facility allocation) was solved for landing allocation by a genetic algorithm. Pareto frontier analysis was applied to distinguish the gradients α-economic, β-ecological, and γ-equilibrium. As expected, the solutions show high diameter changes in the residual stand (average removal of approximately 16 m³ ha⁻¹). All solutions showed grouping of the trees selected for harvesting, although no large clearings formed (canopy removal below 7%, with an average of 2.5 ind ha⁻¹). There were no differences in floristic composition, since species with greater frequency in the initial stand were preferentially selected for harvesting, which implies a lower impact on the demographic rates of the remaining stand. The methodology should support reduced-impact logging projects by using spatial-diversity information to guide better practices in tropical forests.
Abstract: Software is the most significant invention made in recent years to serve various applications. Developing a faultless software system requires the software system design to be resilient, and making the design more efficient requires assessing the reusability of the components used. This paper proposes a software reusability prediction model named Flexible Random Fit (FRF), based on aging resilience, for a Service Net (SN) software system. The reusability prediction model is developed using a multilevel optimization technique based on software characteristics such as cohesion, coupling, and complexity. Metrics are obtained from the SN software system and then subjected to min-max normalization to avoid any saturation during the learning process. The feature extraction process is made more feasible by enriching data quality via outlier detection. The reusability of the classes is estimated with a tool called Soft Audit. Software reusability can be predicted more effectively with the proposed FRF-ANN (Flexible Random Fit - Artificial Neural Network) algorithm. Performance evaluation shows that the proposed algorithm outperforms all the other techniques, thus ensuring the optimization of software reusability based on aging resilience. The model is then validated using constraint-based testing techniques to confirm its optimization and prediction quality.
Abstract: Weeds grow along with nearly all field crops, including rice, wheat, cotton, millets, and sugar cane, affecting crop yield and quality. Classification and accurate identification of all types of weeds is a challenging task for farmers in the early stages of crop growth because of their similarity. To address this issue, an efficient weed classification model is proposed based on a deep convolutional neural network (CNN) that implements automatic feature extraction and performs complex feature learning for image classification. Throughout this work, weed images were used to train the proposed CNN model with an evolutionary computing approach to classify weeds from two publicly available weed datasets: the Tamil Nadu Agricultural University (TNAU) dataset, which consists of 40 classes of weed images, and the dataset from the Indian Council of Agricultural Research - Directorate of Weed Research (ICAR-DWR), which contains 50 classes of weed images. An effective particle swarm optimization (PSO) technique is applied in the proposed CNN to automatically evolve the model and improve its classification accuracy. The proposed model was evaluated and compared with pre-trained transfer learning models such as GoogLeNet, AlexNet, Residual neural Network (ResNet), and Visual Geometry Group Network (VGGNet) for weed classification. This work shows that the PSO-assisted proposed CNN model performs significantly better, with success rates of 98.58% on the TNAU and 97.79% on the ICAR-DWR weed datasets.
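The PSO velocity/position update used to evolve such a model can be sketched in a few lines. This is a generic PSO on a 2-D toy stand-in for validation error (the surrogate function and all parameter values are assumptions, not the paper's CNN pipeline):

```python
import random

def val_error(params):
    """Toy stand-in for validation error over two hyperparameters."""
    lr, reg = params
    return (lr - 0.01) ** 2 + (reg - 0.5) ** 2

def pso(f, dim=2, n=10, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    xs = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [list(x) for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # Inertia + cognitive pull (own best) + social pull (swarm best).
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), fx
                if fx < gbest_f:
                    gbest, gbest_f = list(xs[i]), fx
    return gbest, gbest_f

best_params, best_err = pso(val_error)
```

In the weed-classification setting, `val_error` would be replaced by a (far more expensive) training-and-validation run of the CNN for each candidate configuration, which is why each fitness evaluation dominates the cost of the search.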
Funding: This paper was supported in part by the Natural Science Foundation of Jiangsu Province of China under Grant BK20191381; in part by the Jiangsu Planned Projects for Postdoctoral Research Funds under Grant 2019K223; in part by the National Natural Science Foundation of China under Grants 61802208, 61772286, 61771258, and 61701252; in part by the China Postdoctoral Science Foundation under Grant 2019M651923; in part by the Primary Research & Development Plan of Jiangsu Province under Grant BE2019742; and in part by NUPTSF under Grants NY220060 and NY218035.
Abstract: Manufacturing service composition of the supply side and scheduling of the demand side are two important components of Cloud Manufacturing, which directly affect the quality of Cloud Manufacturing services. However, previous studies have treated the two components independently, ignoring their internal relations and mutual constraints. Considering the two components on both the supply and demand sides of Cloud Manufacturing services at the same time, a Bilateral Collaborative Optimization Model of Cloud Manufacturing (BCOM-CMfg) is constructed in this paper. In BCOM-CMfg, to solve the manufacturing service scheduling problem on the supply side, a new efficient manufacturing service scheduling strategy is proposed. The scheduling strategy is then used, as the input of the service composition problem on the demand side, to build the BCOM-CMfg. Furthermore, the Cooperation Level (CPL) between services is added as an evaluation index in BCOM-CMfg, which reveals the importance of the relationships between services and improves the quality of manufacturing services more comprehensively. Finally, a Self-adaptive Multi-objective Pigeon-inspired Optimization algorithm (S-MOPIO) is proposed to solve the BCOM-CMfg. Simulation results show that the BCOM-CMfg model has advantages in reliability and cost, and that S-MOPIO can solve BCOM-CMfg effectively.
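To illustrate how a cooperation-level term might enter a composite service-quality score, here is a hypothetical sketch; the aggregation rules (additive cost, multiplicative reliability, averaged CPL) and the weights are assumptions for illustration, not the BCOM-CMfg formulation.

```python
# Hypothetical scoring of a candidate service composition; the
# aggregation rules and weights are assumptions, not BCOM-CMfg's.
def composition_score(services, w_cost=0.4, w_rel=0.3, w_cpl=0.3):
    total_cost = sum(s["cost"] for s in services)          # additive
    reliability = 1.0
    for s in services:                                     # multiplicative
        reliability *= s["reliability"]
    cpl = sum(s["cpl"] for s in services) / len(services)  # averaged
    # lower cost is better, so fold it into a decreasing term
    return w_cost / (1.0 + total_cost) + w_rel * reliability + w_cpl * cpl

candidate = [
    {"cost": 2.0, "reliability": 0.95, "cpl": 0.8},
    {"cost": 1.5, "reliability": 0.90, "cpl": 0.7},
]
print(round(composition_score(candidate), 4))  # → 0.5704
```

A multi-objective optimizer such as S-MOPIO would instead keep cost, reliability, and CPL as separate objectives and return a Pareto set; the scalarized score above is only a compact stand-in.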
Funding: This work was partially supported by the Shandong Joint Fund of the National Natural Science Foundation of China (U2006228) and the National Natural Science Foundation of China (61603244).
Abstract: Maintaining population diversity is an important task in multimodal multi-objective optimization. Although zoning search (ZS) can improve diversity in the decision space, assigning the same computational cost to each search subspace may be wasteful when computational resources are limited, especially on imbalanced problems. To alleviate this issue, a zoning search with adaptive resource allocation (ZS-ARA) method is proposed in the current study. In the proposed ZS-ARA, the entire search space is divided into many subspaces to preserve diversity in the decision space and to reduce problem complexity. Moreover, the computational resources can be allocated automatically among all the subspaces. The ZS-ARA is compared with seven algorithms on two different types of multimodal multi-objective problems (MMOPs), namely, balanced and imbalanced MMOPs. The results indicate that, similarly to the ZS, the ZS-ARA achieves high performance on the balanced MMOPs. It can also greatly assist a “regular” algorithm in improving its performance on the imbalanced MMOPs, and is capable of allocating the limited computational resources dynamically.
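A minimal sketch of dividing a fixed evaluation budget among subspaces, in the spirit of the adaptive allocation described above, might look as follows; the proportional-to-improvement rule and the guaranteed floor are assumptions for illustration, not the paper's exact mechanism.

```python
# Hypothetical budget-allocation rule: each subspace gets a floor of
# evaluations, and the rest is split in proportion to its recent
# improvement. Not the paper's exact formula.
def allocate_budget(improvements, total_budget, floor=1):
    n = len(improvements)
    remaining = total_budget - floor * n
    total_imp = sum(improvements)
    if total_imp == 0:  # no signal yet: split the remainder evenly
        shares = [remaining // n] * n
    else:
        shares = [int(remaining * imp / total_imp) for imp in improvements]
    return [floor + s for s in shares]

# Three subspaces with recent improvements 5, 3 and 2; budget of 103.
print(allocate_budget([5, 3, 2], total_budget=103))  # → [51, 31, 21]
```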
Abstract: The females of some species, e.g., chickens, birds, and fish, might mate with more than one male. In the mating of these polygamous creatures, there is competition between males as well as among their offspring; thus, male reproductive success depends on both male competition and sperm rivalry. Inspired by this aspect of the sexual life of roosters with chickens, a novel nature-inspired optimization algorithm called the Roosters Algorithm (RA) is proposed. The algorithm was modelled and implemented based on the sexual behavior of roosters. Thirteen well-known benchmark optimization functions and 10 IEEE CEC 2018 test functions are utilized to compare the performance of RA with that of well-known algorithms: the Standard Genetic Algorithm (SGA), Differential Evolution (DE), Particle Swarm Optimization (PSO), Cuckoo Search (CS), and the Grey Wolf Optimizer (GWO). In addition, non-parametric statistical tests, the Friedman and Wilcoxon signed-rank tests, were performed to demonstrate the significance of the results. In 20 of the 23 functions tested, RA either offered the best results or results similar to those of the other algorithms. Thus, this paper not only presents a novel nature-inspired algorithm, but also offers an alternative that is at least as effective as the well-known algorithms commonly used in the literature.
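The Wilcoxon signed-rank comparison mentioned above can be sketched as follows; the paired scores are made-up illustration data, not results from the paper, and a library routine such as `scipy.stats.wilcoxon` would normally be used instead.

```python
# Sketch of the Wilcoxon signed-rank statistic for paired algorithm
# scores (e.g., per-function results of two optimizers). Illustrative
# only; the data below are made up.
def wilcoxon_w(a, b):
    """Return (W+, W-) for paired samples, dropping zero differences
    and averaging ranks over ties in |difference|."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j < len(order) and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j + 1) / 2          # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[order[k]] = avg
        i = j
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return w_plus, w_minus

# Made-up per-function error counts for two algorithms (lower = better).
print(wilcoxon_w([12, 8, 31, 5, 20], [15, 10, 29, 9, 25]))  # → (1.5, 13.5)
```

The smaller of W+ and W- is compared against a critical value (or converted to a p-value) to decide whether one algorithm significantly outperforms the other.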