This paper aims to frame a new rice disease prediction model that includes three major phases. Initially, median filtering (MF) is deployed during pre-processing, and then the proposed Fuzzy C-Means clustering (FCM)-based segmentation is done. Following that, Discrete Wavelet Transform (DWT), Scale-Invariant Feature Transform (SIFT), low-level features (colour and shape), and proposed Local Binary Pattern (LBP)-based features are extracted and classified via a Multilayer Perceptron (MLP) and Long Short-Term Memory (LSTM) network to obtain the predicted outcomes. For exact prediction, this work optimises the weights of the LSTM using the Inertia-Weighted Salp Swarm Optimisation (IW-SSO) model. Eventually, the development of the IW-SSO method is established on varied metrics.
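As a quick illustration of the pre-processing stage, a median filter replaces each pixel with the median of its neighbourhood, which suppresses salt-and-pepper noise while preserving edges. The sketch below is a minimal numpy-only version; the kernel size and test image are illustrative, not taken from the paper:

```python
import numpy as np

def median_filter(img, k=3):
    """Naive median filter: replace each pixel with the median of its
    k x k neighbourhood (edges handled by reflective padding)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A single bright outlier (salt noise) is removed by the 3 x 3 median.
img = np.zeros((5, 5))
img[2, 2] = 255.0
print(median_filter(img)[2, 2])  # 0.0
```

In practice one would use a library routine for speed; the loop form is kept only to show the operation.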
Global meteorology data are now widely used in various areas, but one of their applications, weather analogues, still requires exhaustive searches over the whole historical dataset. We present two optimisations for state-of-the-art weather analogue search algorithms: a parallelisation and a heuristic search. The heuristic search (NDRank) limits the final number of results and performs initial searches on a lower-resolution dataset to find candidates that, in a second phase, are locally validated. These optimisations were deployed in the cloud and evaluated with ERA5 data from ECMWF. The proposed parallelisation attained speedups close to optimal, and NDRank attains speedups higher than 4. NDRank can be applied to any parallel search, adding similar speedups. A substantial number of executions returned a set of analogues similar to that of the existing exhaustive search, and most of the remaining results differed in numerical value by less than 0.1%. The results demonstrate that it is now possible to search for weather analogues faster (even compared with parallel searches) with little to no error. Furthermore, NDRank can be applied to existing exhaustive searches, providing faster results with a small reduction in precision.
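The coarse-to-fine idea behind NDRank can be sketched as follows: rank all historical fields by distance on a downsampled grid, keep a small candidate set, and compute full-resolution distances only for those candidates. All names, the block-average coarsening, and the parameters below are illustrative, not the paper's implementation:

```python
import numpy as np

def ndrank_search(history, query, k=5, factor=4):
    """Two-phase analogue search sketch: shortlist on a coarse grid,
    then validate only the top-k candidates at full resolution."""
    def coarsen(f):  # block-average downsampling by `factor`
        h, w = f.shape
        return f[:h - h % factor, :w - w % factor].reshape(
            h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    cq = coarsen(query)
    coarse_d = [np.linalg.norm(coarsen(f) - cq) for f in history]
    candidates = np.argsort(coarse_d)[:k]                # phase 1: cheap shortlist
    full_d = {i: np.linalg.norm(history[i] - query) for i in candidates}
    return min(full_d, key=full_d.get)                   # phase 2: exact validation

rng = np.random.default_rng(0)
history = rng.normal(size=(50, 16, 16))                  # toy "historical" fields
query = history[17] + 0.01 * rng.normal(size=(16, 16))   # near-copy of day 17
print(ndrank_search(history, query))  # 17
```

The full-resolution distance is computed for only k of the 50 fields, which is where the speedup comes from.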
This paper proposes a modified grey wolf optimiser-based adaptive super-twisting sliding mode control algorithm for the trajectory tracking and balancing of the rotary inverted pendulum system. The super-twisting sliding mode algorithm greatly alleviates the chattering present in classical sliding mode control. It provides robustness against model uncertainties and external disturbances given knowledge of the upper bounds of the uncertainties and disturbances. The gains of the super-twisting sliding mode algorithm are selected through an adaptive law. The parameters of the adaptation law are tuned using a modified grey wolf optimisation algorithm, a meta-heuristic optimisation technique. Lyapunov stability analysis is carried out to analyse the overall stability of the control system. The performance of the proposed control algorithm is compared with two other sliding mode control strategies from the literature, showing the better performance of the proposed control scheme.
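The standard super-twisting law on a sliding variable s is u = -k1·|s|^(1/2)·sign(s) + w with ẇ = -k2·sign(s); the continuous integral term is what removes the chattering of plain sign-based control. A toy simulation on ṡ = u + d(t) with a bounded disturbance can illustrate this (the gains here are fixed and illustrative; the paper selects them by an adaptive law tuned with the modified grey wolf optimiser):

```python
import numpy as np

def super_twisting(s0, k1=2.0, k2=1.5, dt=1e-3, steps=5000):
    """Super-twisting algorithm on a toy sliding dynamics s' = u + d(t):
        u = -k1*|s|^0.5*sign(s) + w,   w' = -k2*sign(s).
    Gains k1, k2 are illustrative constants, not the paper's values."""
    s, w = s0, 0.0
    for n in range(steps):
        d = 0.3 * np.sin(n * dt)                    # bounded, slowly varying disturbance
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + w  # continuous control signal
        w += -k2 * np.sign(s) * dt                  # integral (twisting) term
        s += (u + d) * dt                           # Euler step of the sliding variable
    return s

print(abs(super_twisting(1.0)) < 0.05)  # True: s is driven near zero despite d(t)
```

Note the control signal u itself stays continuous, unlike first-order sliding mode control where u switches at the sampling rate.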
This paper focuses on the trajectory tracking of quadrotors under bounded external disturbances. An optimised robust controller is proposed to drive the position and attitude of a quadrotor to converge to their references quickly. First, a nonsingular fast terminal sliding mode control is developed, which guarantees not only the stability but also the finite-time convergence of the closed-loop system. As the parameters of the designed controllers play a vital role in control performance, an improved beetle antennae search algorithm is proposed to optimise them. By employing the historical information of the beetle's antennae and dynamically updating the step size as well as the search range, the optimisation is accelerated considerably to ensure the efficiency of the quadrotor control. The superiority of the proposed control scheme is demonstrated by simulation experiments, from which one can see that both the error and the overshoot of the trajectory tracking are reduced effectively.
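The basic beetle antennae search update (the paper's improved variant additionally reuses antennae history and adapts the search range) probes the objective at two "antenna" points along a random direction and steps toward the better side. A minimal sketch with illustrative constants, shown on a simple sphere function rather than the controller-tuning problem:

```python
import numpy as np

def bas_minimise(f, x0, steps=200, step=1.0, d=0.5, eta=0.95, seed=0):
    """Basic beetle antennae search for minimisation (illustrative form)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    for _ in range(steps):
        b = rng.normal(size=x.size)
        b /= np.linalg.norm(b)                  # random antenna direction
        fl, fr = f(x + d * b), f(x - d * b)     # "smell" at both antennae
        x = x - step * b * np.sign(fl - fr)     # move toward the better side
        step *= eta                             # shrink step size...
        d *= eta                                # ...and antenna length
    return x

sphere = lambda x: float(np.sum(x ** 2))
x_best = bas_minimise(sphere, [3.0, -2.0])
print(sphere(x_best) < 0.5)  # True
```

Each iteration costs only two function evaluations, which is why the method is attractive for tuning controller parameters online.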
We evaluate an adaptive optimisation methodology, Bayesian optimisation (BO), for designing a minimum-weight explosive reactive armour (ERA) for protection against a surrogate medium-calibre kinetic energy (KE) long rod projectile and a surrogate shaped charge (SC) warhead. We perform the optimisation using a conventional BO methodology and compare it with a conventional trial-and-error approach by a human expert. A third approach, utilising a novel human-machine teaming framework for BO, is also evaluated. Data for the optimisation are generated using numerical simulations that are demonstrated to provide reasonable qualitative agreement with reference experiments. The human-machine teaming methodology is shown to identify the optimum ERA design in the fewest evaluations, outperforming both the stand-alone human and stand-alone BO methodologies. From a design space of almost 1800 configurations, the human-machine teaming approach identifies the minimum-weight ERA design in 10 samples.
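The BO loop over a finite design space can be illustrated with a numpy-only Gaussian process and an expected-improvement acquisition: fit the GP to the samples so far, score every unsampled configuration, and evaluate the most promising one next. Everything below (the 1-D toy objective, kernel, and constants) is an illustrative sketch, unrelated to the armour simulations:

```python
import numpy as np
from math import erf

def bayes_opt(f, candidates, n_init=5, n_iter=20, ls=1.0, noise=1e-4, seed=0):
    """Minimal Bayesian optimisation (minimisation) over a discrete 1-D
    design space, using an RBF-kernel GP and expected improvement."""
    rng = np.random.default_rng(seed)
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)
    idx = list(rng.choice(len(candidates), n_init, replace=False))
    for _ in range(n_iter):
        X, y = candidates[idx], f(candidates[idx])
        Kinv = np.linalg.inv(k(X, X) + noise * np.eye(len(X)))
        ks = k(candidates, X)
        mu = ks @ Kinv @ y                                   # posterior mean
        var = np.clip(1.0 - np.einsum('ij,jk,ik->i', ks, Kinv, ks), 1e-12, None)
        sd = np.sqrt(var)                                    # posterior std
        best = y.min()
        z = (best - mu) / sd
        Phi = 0.5 * (1 + np.array([erf(t / np.sqrt(2)) for t in z]))
        phi = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
        ei = (best - mu) * Phi + sd * phi                    # expected improvement
        ei[np.array(idx)] = -1.0                             # never resample
        idx.append(int(np.argmax(ei)))
    X, y = candidates[idx], f(candidates[idx])
    return X[int(np.argmin(y))]

grid = np.linspace(-3, 3, 61)            # toy finite design space
target = lambda x: (x - 1.3) ** 2        # toy "weight" objective
x_star = bayes_opt(target, grid)
print(abs(x_star - 1.3) < 0.5)
```

The point of the method is sample efficiency: only n_init + n_iter designs are ever evaluated, mirroring how the paper finds the optimum in 10 samples from ~1800 configurations.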
Decomposition of a complex multi-objective optimisation problem (MOP) into multiple simple sub-MOPs, known as M2M for short, is an effective approach to multi-objective optimisation. However, M2M facilitates little communication/collaboration between sub-MOPs, which limits its use in complex optimisation scenarios. This paper extends the M2M framework to develop a unified algorithm for both multi-objective and many-objective optimisation. Through bilevel decomposition, an MOP is divided into multiple sub-MOPs at the upper level, each of which is further divided into a number of single-objective subproblems at the lower level. Neighbouring sub-MOPs are allowed to share some subproblems so that the knowledge gained from solving one sub-MOP can be transferred to another, and eventually to all the sub-MOPs. The bilevel decomposition is readily combined with some new mating selection and population update strategies, leading to a high-performance algorithm that competes effectively against a number of state-of-the-art algorithms studied in this paper for both multi- and many-objective optimisation. Parameter analysis and component analysis have also been carried out to further justify the proposed algorithm.
To let decentralised on-grid wind power play a positive role in improving voltage stability and reducing losses in the distribution network, a multi-objective two-stage decentralised wind power planning method is proposed in this paper, which takes into account network loss correction for extremely cold regions. First, an electro-thermal model is introduced to reflect the effect of temperature on conductor resistance and to correct the results of the active network loss calculation. Second, a two-stage model for decentralised wind power siting and capacity allocation and reactive voltage optimisation control is constructed, accounting for the network loss correction: in the first stage, a multi-objective planning model is established that considers the whole-life-cycle investment cost of the wind turbine generators (WTGs), the system operating cost and the voltage quality of the power supply; on this basis, the second stage further develops the reactive voltage control strategy of the WTGs, yielding a distribution network loss reduction method based on WTG siting and capacity allocation and a reactive power control strategy. Finally, the optimal configuration scheme is solved by the manta ray foraging optimisation (MRFO) algorithm, and the losses of each branch line and bus of the distribution network before and after adopting this loss reduction method are calculated on the IEEE 33-bus distribution system as an example, which verifies the practicability and validity of the proposed method and provides a reference for decision-making in distributed energy planning for distribution networks.
In recent years, there has been remarkable progress in the performance of metal halide perovskite solar cells. Studies have shown significant interest in lead-free perovskite solar cells (PSCs) due to concerns about the toxicity of lead in lead halide perovskites. CH3NH3SnI3 emerges as a viable alternative to CH3NH3PbX3. In this work, we studied the effect of various parameters on the performance of lead-free perovskite solar cells through simulation with the SCAPS-1D software. The cell structure consists of α-Fe2O3/CH3NH3SnI3/PEDOT:PSS. We analyzed parameters such as layer thickness, doping concentration and defect density. The study revealed that, without considering the other optimized parameters, the efficiency of the cell increased from 22% to 35% when the perovskite thickness varied from 100 to 1000 nm. After optimization, the solar cell efficiency reaches up to 42%. The optimized parameters are, for the perovskite, a layer thickness of 700 nm, a doping concentration of 10^20 cm−3 and a defect density of 10^13 cm−3, and, for the hematite, a thickness of 5 nm, a doping concentration of 10^22 cm−3 and a defect concentration of 10^11 cm−3. These results are encouraging because they highlight the good agreement between perovskite and hematite when used as the active and electron transport layers, respectively. It now remains to produce real, viable photovoltaic solar cells with the proposed material layer parameters.
Due to recent improvements in forensic DNA testing kit sensitivity, there has been increased demand in the criminal justice community to revisit past convictions or cold cases. Some of these cases have little biological evidence other than touch DNA in the form of archived latent fingerprint lift cards. In this study, a previously developed optimised workflow for this sample type was tested on aged fingerprints to determine whether improved short tandem repeat (STR) profiles could be obtained. Two-year-old samples processed with the optimised workflow produced an average of approximately five more STR alleles per profile than the traditional method. The optimised workflow also produced detectable alleles in samples aged up to 28 years. Of the methods tested, the optimised workflow resulted in the most informative profiles from evidence samples most representative of the forensic need. This workflow is recommended for use with archived latent fingerprint samples, regardless of archival time.
Over the last decade, the rapid growth in traffic and in the number of network devices has implicitly led to an increase in network energy consumption. In this context, a new paradigm has emerged: Software-Defined Networking (SDN), an emerging technique that separates the control plane from the data plane of the deployed network, enabling centralized control of the network while offering flexibility in data center network management. Some research is moving in the direction of optimizing the energy consumption of software-defined data center networks (SD-DCNs), but it still does not guarantee good performance and quality of service for SDN networks. To solve this problem, we propose a new mathematical model based on combinatorial optimization to dynamically decide which energy-consuming switches and unused links to activate and deactivate in SDN networks, while guaranteeing quality of service (QoS) and ensuring load balancing in the network.
In real-world applications, datasets frequently contain outliers, which can hinder the generalization ability of machine learning models. Bayesian classifiers, a popular supervised learning method, rely on accurate probability density estimation for classifying continuous datasets. However, achieving precise density estimation on datasets containing outliers poses a significant challenge. This paper introduces a Bayesian classifier that utilizes optimized robust kernel density estimation to address this issue. The proposed method enhances the accuracy of probability density estimation by mitigating the impact of outliers on the estimated distribution of the training sample. Unlike the conventional kernel density estimator, the robust estimator can be seen as a weighted kernel mapping summary for each sample. This kernel mapping performs the inner product in a Hilbert space, allowing the kernel density estimate to be considered the average of the samples' mappings in the Hilbert space under a reproducing kernel. M-estimation techniques are used to obtain accurate mean values and to solve for the weights. Meanwhile, complete cross-validation is used as the objective function in the search for the optimal bandwidth, which impacts the estimator. Harris Hawks Optimisation is used to optimize the objective function and improve the estimation accuracy. The experimental results show that it outperforms other optimization algorithms in convergence speed and objective function value during the bandwidth search. The optimal robust kernel density estimator achieves better fitting performance than the traditional kernel density estimator when the training data contain outliers.
The Naïve Bayesian classifier with optimal robust kernel density estimation improves generalization in classification with outliers.
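The idea of an outlier-resistant, weighted density estimate can be conveyed by a much-simplified 1-D sketch: Huber-style M-estimation weights are obtained by iteratively reweighted least squares and then plugged into a weighted Gaussian KDE. This only illustrates the concept; the paper solves the weights in a reproducing-kernel Hilbert space and tunes the bandwidth with Harris Hawks Optimisation:

```python
import numpy as np

def robust_kde(x, bw, iters=10, c=1.5):
    """Simplified robust KDE: IRLS weights from a Huber M-estimate of the
    centre, then a weighted Gaussian kernel density estimate (1-D sketch)."""
    x = np.asarray(x, float)
    mu = np.median(x)                       # robust initial centre
    for _ in range(iters):                  # IRLS for the Huber M-estimate
        r = np.abs(x - mu)
        w = np.where(r <= c, 1.0, c / np.maximum(r, 1e-12))
        mu = np.sum(w * x) / np.sum(w)
    w /= w.sum()                            # outliers end up with tiny weight

    def density(t):
        t = np.atleast_1d(t)
        z = (t[:, None] - x[None, :]) / bw
        return (w * np.exp(-0.5 * z ** 2) / (bw * np.sqrt(2 * np.pi))).sum(axis=1)

    return density

rng = np.random.default_rng(1)
sample = np.concatenate([rng.normal(0, 1, 200), np.full(5, 25.0)])  # 5 outliers
pdf = robust_kde(sample, bw=0.4)
print(pdf(0.0)[0] > pdf(25.0)[0])  # True: outlier cluster gets little mass
```

An unweighted KDE would place a visible bump at the outlier location; the downweighting suppresses it, which is exactly what helps the downstream Bayesian classifier.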
The development of photocatalytic technology has grown significantly since its initial report, and as such a number of screening methods have been developed to assess activity. In the field of environmental remediation, a crucial factor is the formation of highly oxidising species such as OH radicals. These radicals are often the primary driving force for the removal and breakdown of organic and inorganic contaminants. The quantification of such species is challenging due to the nature of the radical; however, indirect methods that deploy a chemical probe to essentially capture the radical have been shown to be effective. As discussed in the work presented here, optimisation of such a method is fundamental to its efficiency. A starting coumarin concentration range of 50 mmol/L to 1000 mmol/L was examined along with a TiO2 catalyst loading of 0.01 g/L to 1 g/L, identifying 250 mmol/L and 0.5 g/L TiO2 as the optimum conditions for production. Under these parameters a maximum production of 35.91 mmol/L (Rmax = 0.4 mmol/L OH* min^-1) was achieved, which yielded a photonic efficiency of 4.88 OH* moles photon^-1 under UV irradiation. The data set presented also highlighted the limitations associated with the method, which included rapid exhaustion of the probe molecule and process inhibition through UV light saturation. Identifying both the optimum conditions and the potential limitations of the process was concluded to be key for the efficient deployment of the photocatalytic screening method.
Nowadays it is known that the thermomechanical schedules applied during hot rolling of flat products provide the steel with improved mechanical properties. In this work an optimisation tool, OptiLam (OptiLam v.1), based on predictive software and capable of generating optimised rolling schedules to obtain the desired mechanical properties in the final product, is described. OptiLam includes some well-known metallurgical models that predict microstructural evolution during hot rolling and the austenite-to-ferrite transformation during cooling. Furthermore, an optimisation algorithm based on the gradient method has been added in order to design thermomechanical sequences when a specific final grain size is desired. OptiLam has been used to optimise rolling parameters such as strain and temperature. Here, some results of the software validation performed by means of hot torsion tests are presented, also showing the functionality of the tool. Finally, the application of classical optimisation models based on the gradient method to hot rolling operations is also discussed.
A method for packing irregular particles with a prescribed volume fraction is proposed. Furthermore, the generated granular material adheres to the prescribed statistical distribution and satisfies the desired complex spatial arrangement. First, the irregular geometries of realistic particles were obtained from original particle images. Second, the Minkowski sum was used to check the overlap between irregular particles and to place an irregular particle in contact with other particles. Third, the optimised advance front method (OAFM) generated irregular particle packings with the prescribed statistical distribution and volume fraction based on the Minkowski sum. Moreover, the signed distance function was introduced to pack the particles in accordance with the desired spatial arrangement. Finally, seven biaxial tests were performed using the UDEC software, demonstrating the accuracy and potential usefulness of the proposed method. It can model granular material efficiently and reflect the meso-structural characteristics of complex granular materials, and it has a wide range of applications where discrete modelling of granular media is necessary.
Research into automatically searching for an optimal neural network (NN) via optimisation algorithms is a significant research topic in deep learning and artificial intelligence. However, it is still challenging due to two issues: both the hyperparameters and the architecture should be optimised, and the optimisation process is computationally expensive. To tackle these two issues, this paper focuses on solving the hyperparameter and architecture optimisation problem for NNs and proposes a novel lightweight scale-adaptive fitness evaluation-based particle swarm optimisation (SAFE-PSO) approach. Firstly, the SAFE-PSO algorithm considers the hyperparameters and architectures together in the optimisation problem and can therefore find their optimal combination for the globally best NN. Secondly, the computational cost can be reduced by using multi-scale accuracy evaluation methods to evaluate candidates. Thirdly, a stagnation-based switch strategy is proposed to adaptively switch between evaluation methods to better balance search performance and computational cost. The SAFE-PSO algorithm is tested on two widely used datasets: the 10-category CIFAR10 and the 100-category CIFAR100. The experimental results show that SAFE-PSO is very effective and efficient: it can not only find a promising NN automatically but also find a better NN than the compared algorithms at the same computational cost.
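The stagnation-based switch between fitness evaluations of different cost can be sketched with a generic PSO on a toy objective: a cheap, low-fidelity fitness is used until the global best stalls, then the swarm switches to the full evaluation. The real algorithm evaluates neural networks at multiple accuracy scales; everything below (objectives, swarm constants, switch rule) is illustrative:

```python
import numpy as np

def safe_pso_sketch(f_cheap, f_full, dim, n=20, iters=100, patience=10, seed=0):
    """PSO with a stagnation-triggered switch from a cheap surrogate
    fitness to the full evaluation (illustrative constants)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    fit = f_cheap                                   # start with the cheap fitness
    pbest, pval = x.copy(), np.array([fit(p) for p in x])
    g, gval, stall = pbest[np.argmin(pval)], pval.min(), 0
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = np.array([fit(p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        if pval.min() < gval - 1e-12:
            g, gval, stall = pbest[np.argmin(pval)], pval.min(), 0
        else:
            stall += 1
        if stall >= patience and fit is f_cheap:    # stagnated: go high fidelity
            fit = f_full
            pval = np.array([fit(p) for p in pbest])
            g, gval, stall = pbest[np.argmin(pval)], pval.min(), 0
    return g, f_full(g)

cheap = lambda x: float(np.sum(np.round(x, 1) ** 2))   # coarse surrogate
full = lambda x: float(np.sum(x ** 2))                 # full evaluation
g, val = safe_pso_sketch(cheap, full, dim=3)
print(val < 0.05)  # True
```

Most evaluations are paid at the cheap fidelity; the expensive one is only used once the cheap signal stops discriminating between candidates.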
Since 2019, coronavirus disease 2019 (COVID-19) has been spreading rapidly worldwide, posing an unignorable threat to the global economy and human health. It is caused by severe acute respiratory syndrome coronavirus 2, a single-stranded RNA virus of the genus Betacoronavirus. This virus is highly infectious and relies on the angiotensin-converting enzyme 2 receptor to enter cells. With the increase in the number of confirmed COVID-19 diagnoses, the difficulty of diagnosis due to the lack of global healthcare resources becomes increasingly apparent. Deep learning-based computer-aided diagnosis models with high generalisability can effectively alleviate this pressure. Hyperparameter tuning is essential in training such models and significantly impacts their final performance and training speed. However, traditional hyperparameter tuning methods are usually time-consuming and unstable. To solve this issue, we introduce particle swarm optimisation to build a PSO-guided self-tuning convolutional neural network (PSTCNN), allowing the model to tune its hyperparameters automatically and thereby reducing human involvement. Moreover, the optimisation algorithm selects combinations of hyperparameters in a targeted manner, thus stably achieving a solution closer to the global optimum. Experimentally, the PSTCNN obtains excellent results, with a sensitivity of 93.65% ± 1.86%, a specificity of 94.32% ± 2.07%, a precision of 94.30% ± 2.04%, an accuracy of 93.99% ± 1.78%, an F1-score of 93.97% ± 1.78%, a Matthews correlation coefficient of 87.99% ± 3.56%, and a Fowlkes-Mallows index of 93.97% ± 1.78%. Our experiments demonstrate that, compared to traditional methods, hyperparameter tuning of the model using an optimisation algorithm is faster and more effective.
This paper studies the stochastic dynamics of a two-degree-of-freedom system, where a primary linear system is connected to a nonlinear energy sink with cubic stiffness nonlinearity and viscous damping. With the primary mass subjected to zero-mean Gaussian white noise excitation, the main objective of this study is to maximise the efficiency of the targeted energy transfer in the system. A surrogate optimisation algorithm is proposed for this purpose and adapted to the stochastic framework. The optimisations are conducted separately for the nonlinear stiffness coefficient alone as well as for the nonlinear stiffness and damping coefficients together. Three different optimisation cost functions, based either on the energy of the system's components or on the dissipated energy, are considered. The results demonstrate clear trends in the values of the nonlinear energy sink coefficients and show the effect of the different cost functions on the optimal values of the nonlinear system's coefficients.
Introducing carbon trading into the electricity market can convert carbon dioxide into a schedulable resource with economic value. However, the randomness of wind power generation places higher requirements on electricity market transactions. Therefore, the carbon trading market is introduced into the wind power market, and a new low-carbon economic dispatch model is developed. First, the economic dispatch goal of wind power is considered: the model aims to save money and reduce the cost of power generation for the system. It includes risk operating costs to account for the impact of wind power output variability on the system, as well as the negative-efficiency operating costs of wind farms to account for the loss caused by wind abandonment. The model also employs carbon trading market metrics to achieve the goal of lowering system carbon emissions, and the impact of different carbon trading prices on the system is analysed. A low-carbon economic dispatch model for the wind power market is implemented based on these two goals. Finally, the solution is optimised using an ant lion optimisation method that combines the Lévy flight mechanism and the golden sine. The rationality of the proposed model and algorithm is proven through case studies.
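The Lévy flight mechanism mentioned here is commonly implemented with Mantegna's algorithm, which draws heavy-tailed step lengths from the ratio of two Gaussians; occasional long jumps help a metaheuristic escape local optima. A generic sketch (not the paper's code; β and sizes are illustrative):

```python
import numpy as np
from math import gamma

def levy_step(beta=1.5, size=1, rng=None):
    """Mantegna's algorithm for Lévy-flight step lengths with tail index beta."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)   # heavy-tail numerator
    v = rng.normal(0, 1, size)       # Gaussian denominator
    return u / np.abs(v) ** (1 / beta)

rng = np.random.default_rng(0)
steps = levy_step(size=10000, rng=rng)
# Heavy tail: the largest jump dwarfs the typical (median) step length.
print(np.median(np.abs(steps)) < np.abs(steps).max() / 100)  # True
```

In an optimiser such as ant lion optimisation, these step lengths scale the candidate-position updates in place of uniform or Gaussian perturbations.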
Change point detection is becoming increasingly important because it can support data analysis by providing labels to the data in an unsupervised manner. In the context of process data analytics, change points in the time series of process variables may carry important indications about the process operation. For example, in a batch process, the change points can correspond to the operations and phases defined by the batch recipe; hence identifying change points can assist in labelling the time series data. Various unsupervised algorithms have been developed for change point detection, including the optimisation approach, which minimises a cost function with certain penalties to search for the change points, and the Bayesian approach, which uses Bayesian statistics to calculate the posterior probability of a specific sample being a change point. This paper investigates how the two approaches can be applied to process data analytics. In addition, a new type of cost function using Tikhonov regularisation is proposed for the optimisation approach to reduce irrelevant change points caused by randomness in the data. The novelty lies in using regularisation-based cost functions to handle ill-posed problems of noisy data. The results demonstrate that change point detection is useful for process data analytics because change points can produce data segments corresponding to different operating modes or varying conditions, which will be useful for other machine learning tasks.
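The generic penalised-cost formulation behind the optimisation approach can be illustrated with an exact dynamic programme: minimise the sum of per-segment squared errors plus a fixed penalty per change point. This is a sketch of the standard formulation, not the paper's Tikhonov-regularised cost:

```python
import numpy as np

def detect_change_points(y, penalty):
    """Exact O(n^2) dynamic programme for penalised-cost change point
    detection; segment cost = squared error around the segment mean."""
    y = np.asarray(y, float)
    n = len(y)
    s1 = np.concatenate(([0.0], y.cumsum()))
    s2 = np.concatenate(([0.0], (y ** 2).cumsum()))

    def cost(i, j):  # SSE of y[i:j] around its own mean, from prefix sums
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / (j - i)

    F = np.full(n + 1, np.inf)
    F[0] = -penalty                      # first segment pays no penalty
    prev = np.zeros(n + 1, int)
    for j in range(1, n + 1):
        c = [F[i] + cost(i, j) + penalty for i in range(j)]
        prev[j] = int(np.argmin(c))
        F[j] = c[prev[j]]
    cps, j = [], n
    while j > 0:                         # backtrack the optimal segmentation
        if prev[j] > 0:
            cps.append(int(prev[j]))
        j = prev[j]
    return sorted(cps)

# Piecewise-constant signal with level shifts at samples 30 and 60.
y = np.concatenate([np.zeros(30), np.ones(30) * 4.0, np.zeros(30)])
print(detect_change_points(y, penalty=5.0))  # [30, 60]
```

The penalty plays the role of the regularisation term: raising it suppresses spurious change points caused by noise, which is the trade-off the Tikhonov-based cost in the paper is designed to control.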
Settlement prediction of geosynthetic-reinforced soil (GRS) abutments under service loading conditions is an arduous and challenging task for practising geotechnical/civil engineers. Hence, in this paper, a novel hybrid artificial intelligence (AI)-based model was developed by combining an artificial neural network (ANN) with Harris hawks optimisation (HHO), that is, ANN-HHO, to predict the settlement of GRS abutments. Five other robust intelligent models, namely support vector regression (SVR), Gaussian process regression (GPR), relevance vector machine (RVM), sequential minimal optimisation regression (SMOR) and least-median square regression (LMSR), were constructed and compared with the ANN-HHO model. The predictive strength, reliability and robustness of the model were evaluated based on rigorous statistical testing, ranking criteria, a multi-criteria approach, uncertainty analysis and sensitivity analysis (SA). Moreover, the predictive veracity of the model was substantiated against several large-scale independent experimental studies on GRS abutments reported in the scientific literature. The findings demonstrate that the ANN-HHO model predicts the settlement of GRS abutments with reasonable accuracy and yields superior performance in comparison to the counterpart models. Therefore, it can become one of the predictive tools employed by geotechnical/civil engineers in preliminary decision-making when investigating the in-service performance of GRS abutments. Finally, the model has been converted into a simple mathematical formulation for easy hand calculations, and it proves cost-effective and less time-consuming in comparison to experimental tests and numerical simulations.
文摘This paper aims to frame a new rice disease prediction model that included three major phases.Initially,median filtering(MF)is deployed during pre-processing and then‘proposed Fuzzy Means Clustering(FCM)based segmentation’is done.Following that,‘Discrete Wavelet Transform(DWT),Scale-Invariant Feature Transform(SIFT)and low-level features(colour and shape),Proposed local Binary Pattern(LBP)based features’are extracted that are classified via‘MultiLayer Perceptron(MLP)and Long Short Term Memory(LSTM)’and predicted outcomes are obtained.For exact prediction,this work intends to optimise the weights of LSTM using Inertia Weighted Salp Swarm Optimisation(IW-SSO)model.Eventually,the development of IW-SSO method is established on varied metrics.
基金the Fundação para a Ciência e a Tecnologia[UIDB/50021/2020].
文摘Global meteorology data are now widely used in various areas, but one of its applications, weather analogues, still require exhaustive searches on the whole historical data. We present two optimisations for the state-of-the-art weather analogue search algorithms: a parallelization and a heuristic search. The heuristic search (NDRank) limits of the final number of results and does initial searches on a lower resolution dataset to find candidates that, in the second phase, are locally validated. These optimisations were deployed in the Cloud and evaluated with ERA5 data from ECMWF. The proposed parallelization attained speedups close to optimal, and NDRank attains speedups higher than 4. NDRank can be applied to any parallel search, adding similar speedups. A substantial number of executions returned a set of analogues similar to the existing exhaustive search and most of the remaining results presented a numerical value difference lower than 0.1%. The results demonstrate that it is now possible to search for weather analogues in a faster way (even compared with parallel searches) with results with little to no error. Furthermore, NDRank can be applied to existing exhaustive searches, providing faster results with small reduction of the precision of the results.
Abstract: This paper proposes a modified grey wolf optimiser-based adaptive super-twisting sliding mode control algorithm for the trajectory tracking and balancing of the rotary inverted pendulum system. The super-twisting sliding mode algorithm greatly alleviates the chattering present in classical sliding mode control. It provides robustness against model uncertainties and external disturbances given knowledge of the upper bounds of the uncertainties and disturbances. The gains of the super-twisting sliding mode algorithm are selected through an adaptive law. Parameters of the adaptation law are tuned using a modified grey wolf optimisation algorithm, a meta-heuristic optimisation technique. Lyapunov stability analysis is carried out to analyse the overall control system stability. The performance of the proposed control algorithm is compared with two other sliding mode control strategies from the literature, showing better performance of the proposed control scheme.
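For reference, a minimal simulation of the (non-adaptive) super-twisting algorithm on a scalar sliding variable with a bounded disturbance is shown below; the gains and disturbance are illustrative choices, and the paper's adaptive, optimiser-tuned version is not reproduced here:

```python
import numpy as np

def super_twisting_sim(k1=2.0, k2=2.0, dt=1e-3, T=5.0):
    """Super-twisting algorithm on s_dot = u + d(t):
    u = -k1*sqrt(|s|)*sign(s) + v,  v_dot = -k2*sign(s).
    The continuous control u suppresses chattering while the integral
    term v reconstructs the disturbance (requires k2 > |d_dot| bound)."""
    s, v = 1.0, 0.0
    for t in np.arange(0.0, T, dt):
        d = 0.5 * np.sin(2.0 * t)                     # bounded disturbance, |d_dot| <= 1
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
        v += -k2 * np.sign(s) * dt
        s += (u + d) * dt                             # explicit Euler integration
    return s
```

In the adaptive scheme described in the abstract, `k1` and `k2` would be generated by the adaptation law whose parameters the modified grey wolf optimiser tunes.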
Funding: Fujian Provincial Science and Technology Major Project (No. 2020HZ02014); Education and Teaching Reform Research Project for Colleges and Universities in Fujian Province (No. FBJG20210239); Huaqiao University Graduate Education Teaching Reform Research Funding Project (No. 20YJG009).
Abstract: This paper focuses on the trajectory tracking of quadrotors under bounded external disturbances. An optimised robust controller is proposed to drive the position and attitude of a quadrotor to converge to their references quickly. First, nonsingular fast terminal sliding mode control is developed, which guarantees not only the stability but also the finite-time convergence of the closed-loop system. As the parameters of the designed controllers play a vital role in control performance, an improved beetle antennae search algorithm is proposed to optimise them. By employing the historical information of the beetle's antennae and dynamically updating the step size as well as the search range, the optimisation is accelerated considerably to ensure the efficiency of the quadrotor control. The superiority of the proposed control scheme is demonstrated by simulation experiments, from which one can see that both the error and the overshoot of the trajectory tracking are reduced effectively.
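A bare-bones beetle antennae search, without the paper's historical-information and dynamic-range improvements, can be sketched as follows; the step size, antenna length, and shrink rate are illustrative assumptions:

```python
import numpy as np

def beetle_antennae_search(f, x0, step=1.0, d=0.5, eta=0.95, iters=200, seed=0):
    """Beetle antennae search: probe the objective along a random direction with
    two 'antennae', step toward the better side, and shrink the step each round."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for _ in range(iters):
        b = rng.standard_normal(x.size)
        b /= np.linalg.norm(b)                     # random unit direction
        fl, fr = f(x + d * b), f(x - d * b)        # left/right antenna samples
        x = x - step * b * np.sign(fl - fr)        # move toward the smaller value
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
        step *= eta                                # shrink step size
        d = max(0.1, d * eta)                      # shrink antenna length (floored)
    return best_x, best_f
```

In the paper's setting, `f` would score the tracking performance of the sliding mode controller for a candidate parameter vector rather than a test function.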
Abstract: We evaluate an adaptive optimisation methodology, Bayesian optimisation (BO), for designing a minimum-weight explosive reactive armour (ERA) for protection against a surrogate medium-calibre kinetic energy (KE) long rod projectile and a surrogate shaped charge (SC) warhead. We perform the optimisation using a conventional BO methodology and compare it with a conventional trial-and-error approach from a human expert. A third approach, utilising a novel human-machine teaming framework for BO, is also evaluated. Data for the optimisation are generated using numerical simulations that are demonstrated to provide reasonable qualitative agreement with reference experiments. The human-machine teaming methodology is shown to identify the optimum ERA design in the fewest evaluations, outperforming both the stand-alone human and stand-alone BO methodologies. From a design space of almost 1800 configurations, the human-machine teaming approach identifies the minimum-weight ERA design in 10 samples.
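A minimal 1-D Bayesian optimisation loop (Gaussian process surrogate with an RBF kernel plus expected improvement over a candidate grid) illustrates the methodology in general terms; the kernel length-scale, grid density, and evaluation budgets are assumptions, and the paper's human-machine teaming layer is not modelled:

```python
import numpy as np
from math import erf

def bayes_opt(f, lo=0.0, hi=1.0, n_init=3, n_iter=10, seed=0):
    """Minimal BO sketch: fit a GP to the evaluations so far, then evaluate
    the design with the highest expected improvement (minimisation)."""
    rng = np.random.default_rng(seed)
    X = list(rng.uniform(lo, hi, n_init))
    y = [f(v) for v in X]
    grid = np.linspace(lo, hi, 201)                # discrete candidate designs
    kern = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / 0.1) ** 2)
    for _ in range(n_iter):
        Xa, ya = np.array(X), np.array(y)
        K = kern(Xa, Xa) + 1e-6 * np.eye(Xa.size)  # jitter for stability
        Ks = kern(grid, Xa)
        mu = Ks @ np.linalg.solve(K, ya)           # GP posterior mean
        var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
        sd = np.sqrt(np.clip(var, 1e-12, None))
        z = (ya.min() - mu) / sd
        Phi = 0.5 * (1.0 + np.array([erf(v / np.sqrt(2.0)) for v in z]))
        phi = np.exp(-0.5 * z * z) / np.sqrt(2.0 * np.pi)
        ei = sd * (z * Phi + phi)                  # expected improvement
        x_next = float(grid[int(np.argmax(ei))])
        X.append(x_next)
        y.append(f(x_next))
    return X[int(np.argmin(y))], min(y)
```

For the ERA problem, `f` would be the numerical simulation scoring a candidate armour configuration, and the 1-D grid would be replaced by the discrete design space of roughly 1800 configurations.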
Funding: Supported in part by the National Natural Science Foundation of China (62376288, U23A20347), the Engineering and Physical Sciences Research Council of the UK (EP/X041239/1), and the Royal Society International Exchanges Scheme of the UK (IEC/NSFC/211404).
Abstract: Decomposition of a complex multi-objective optimisation problem (MOP) into multiple simple sub-MOPs, known as M2M for short, is an effective approach to multi-objective optimisation. However, M2M facilitates little communication/collaboration between sub-MOPs, which limits its use in complex optimisation scenarios. This paper extends the M2M framework into a unified algorithm for both multi-objective and many-objective optimisation. Through bilevel decomposition, an MOP is divided into multiple sub-MOPs at the upper level, each of which is further divided into a number of single-objective subproblems at the lower level. Neighbouring sub-MOPs are allowed to share some subproblems so that the knowledge gained from solving one sub-MOP can be transferred to another, and eventually to all the sub-MOPs. The bilevel decomposition is readily combined with some new mating selection and population update strategies, leading to a high-performance algorithm that competes effectively against a number of state-of-the-art algorithms studied in this paper for both multi- and many-objective optimisation. Parameter analysis and component analysis have also been carried out to further justify the proposed algorithm.
Funding: Supported by the National Natural Science Foundation of China (52177081).
Abstract: To let decentralised on-grid wind power play a positive role in improving voltage stability and reducing losses in the distribution network, a multi-objective two-stage decentralised wind power planning method is proposed that accounts for network loss correction in extremely cold regions. First, an electro-thermal model is introduced to reflect the effect of temperature on conductor resistance and to correct the calculated active network loss. Second, a two-stage multi-objective model for wind power siting and capacity allocation and for reactive voltage optimisation control is constructed with this loss correction. In the first stage, a multi-objective planning model is established that considers the whole-life-cycle investment cost of the wind turbine generators (WTGs), the system operating cost, and the voltage quality of the power supply; in the second stage, the reactive voltage control strategy of the WTGs is further developed on this basis, yielding a distribution network loss reduction method based on WTG siting and capacity allocation combined with a reactive power control strategy. Finally, the optimal configuration scheme is solved by the manta ray foraging optimisation (MRFO) algorithm, and the loss of each branch line and each bus of the distribution network before and after adopting this loss reduction method is calculated on the IEEE 33-bus distribution system, which verifies the practicability and validity of the proposed method and provides a decision-making reference for distributed energy planning in distribution networks.
Abstract: In recent years, there has been remarkable progress in the performance of metal halide perovskite solar cells. Studies have shown significant interest in lead-free perovskite solar cells (PSCs) due to concerns about the toxicity of lead in lead halide perovskites. CH3NH3SnI3 emerges as a viable alternative to CH3NH3PbX3. In this work, we studied the effect of various parameters on the performance of lead-free perovskite solar cells through simulation with the SCAPS-1D software. The cell structure consists of α-Fe2O3/CH3NH3SnI3/PEDOT:PSS. We analysed parameters such as layer thickness, doping concentration, and defect concentration. The study revealed that, without considering other optimised parameters, the efficiency of the cell increased from 22% to 35% when the perovskite thickness varied from 100 to 1000 nm. After optimisation, the solar cell efficiency reaches up to 42%. The optimised parameters are, for the perovskite layer: a thickness of 700 nm, a doping concentration of 10²⁰ cm⁻³ and a defect density of 10¹³ cm⁻³; and for the hematite layer: a thickness of 5 nm, a doping concentration of 10²² cm⁻³ and a defect concentration of 10¹¹ cm⁻³. These results are encouraging because they highlight the good agreement between perovskite and hematite when used as the active and electron transport layers, respectively. It remains necessary to fabricate real, viable photovoltaic solar cells with the proposed material layer parameters.
Funding: This work was supported by the Department of Forensic Science of Virginia Commonwealth University and National Institute of Justice (NIJ) Award 2014-DNBX-K013.
Abstract: Due to recent improvements in forensic DNA testing kit sensitivity, there has been an increased demand in the criminal justice community to revisit past convictions or cold cases. Some of these cases have little biological evidence other than touch DNA in the form of archived latent fingerprint lift cards. In this study, a previously developed optimised workflow for this sample type was tested on aged fingerprints to determine whether improved short tandem repeat (STR) profiles could be obtained. Two-year-old samples processed with the optimised workflow produced an average of approximately five more STR alleles per profile than the traditional method. The optimised workflow also produced detectable alleles in samples aged up to 28 years. Of the methods tested, the optimised workflow resulted in the most informative profiles from evidence samples most representative of the forensic need. This workflow is recommended for use with archived latent fingerprint samples, regardless of archival time.
Abstract: Over the last decade, the rapid growth in traffic and the number of network devices has implicitly led to an increase in network energy consumption. In this context, a new paradigm has emerged: Software-Defined Networking (SDN), an emerging technique that separates the control plane and the data plane of the deployed network, enabling centralized control of the network while offering flexibility in data center network management. Some research is moving in the direction of optimizing the energy consumption of software-defined data center networks (SD-DCN), but still does not guarantee good performance and quality of service for SDN networks. To solve this problem, we propose a new mathematical model based on combinatorial optimization to dynamically activate and deactivate the switches and unused links that consume energy in SDN networks, while guaranteeing quality of service (QoS) and ensuring load balancing in the network.
Abstract: In real-world applications, datasets frequently contain outliers, which can hinder the generalization ability of machine learning models. Bayesian classifiers, a popular supervised learning method, rely on accurate probability density estimation for classifying continuous datasets. However, achieving precise density estimation with datasets containing outliers poses a significant challenge. This paper introduces a Bayesian classifier that utilizes optimized robust kernel density estimation to address this issue. Our proposed method enhances the accuracy of probability density estimation by mitigating the impact of outliers on the training sample's estimated distribution. Unlike the conventional kernel density estimator, our robust estimator can be seen as a weighted kernel mapping summary for each sample. This kernel mapping performs the inner product in a Hilbert space, allowing the kernel density estimate to be viewed as the average of the samples' mappings in the Hilbert space under a reproducing kernel. M-estimation techniques are used to obtain accurate mean values and to solve for the weights. Meanwhile, complete cross-validation is used as the objective function in the search for the optimal bandwidth, which determines the estimator. Harris Hawks Optimization is applied to this objective function to improve the estimation accuracy, and the experimental results show that it outperforms other optimization algorithms in convergence speed and objective function value during the bandwidth search. The optimal robust kernel density estimator achieves better fitting performance than the traditional kernel density estimator when the training data contain outliers, and Naïve Bayes with optimal robust kernel density estimation improves generalization in classification with outliers.
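One way to picture a robust, weighted kernel density estimate is the following sketch, which iteratively downweights low-density samples as a simple stand-in for the paper's M-estimation step (the bandwidth and weighting rule are illustrative, not the proposed method):

```python
import numpy as np

def robust_kde(x, grid, bw=0.3, iters=5):
    """Weighted Gaussian KDE: samples are re-weighted each round so that
    points in low-density regions (likely outliers) contribute less.
    iters=0 reduces to the plain (uniformly weighted) KDE."""
    w = np.ones(x.size) / x.size
    K = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    for _ in range(iters):
        dens_at_x = (w[None, :] * K((x[:, None] - x[None, :]) / bw)).sum(1) / bw
        w = dens_at_x / dens_at_x.sum()            # trust points in dense regions
    return (w[None, :] * K((grid[:, None] - x[None, :]) / bw)).sum(1) / bw
```

With a few outliers far from the bulk of the data, the reweighted estimate assigns them much less mass than the plain estimator, which is the qualitative behaviour the robust estimator in the abstract is after.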
Funding: The authors acknowledge the financial support of Northern Ireland's Department of Education and Learning for funding Caitlin Buck's PhD, and the Queen's University Belfast Pioneering Research Programme (PRP) for funding the research of Dr Nathan Skillen.
Abstract: The development of photocatalytic technology has grown significantly since its initial report and, as such, a number of screening methods have been developed to assess activity. In the field of environmental remediation, a crucial factor is the formation of highly oxidising species such as OH radicals. These radicals are often the primary driving force for the removal and breakdown of organic and inorganic contaminants. The quantification of such compounds is challenging due to the nature of the radical; however, indirect methods which deploy a chemical probe to essentially capture the radical have been shown to be effective. As discussed in the work presented here, optimisation of such a method is fundamental to its efficiency. A starting coumarin concentration range of 50 mmol/L to 1000 mmol/L was used along with a catalyst loading of 0.01 g/L to 1 g/L TiO2 to identify that 250 mmol/L and 0.5 g/L TiO2 were the optimum conditions for OH radical production. Under these parameters a maximum production of 35.91 mmol/L (Rmax = 0.4 mmol/L OH* min⁻¹) was achieved, which yielded a photonic efficiency of 4.88 OH* moles photon⁻¹ under UV irradiation. The data set presented also highlighted the limitations associated with the method, which included rapid exhaustion of the probe molecule and process inhibition through UV light saturation. Identifying both the optimum conditions and the potential limitations of the process was concluded to be key for the efficient deployment of the photocatalytic screening method.
Funding: Supported by the project "Quality improvement by metallurgical optimised stock temperature evolution in the reheating furnace including microstructure feedback from the rolling mill" (OPTHEAT, RFSR-CT-2006-00007) of the Research Fund for Coal and Steel (RFCS) of the European Union.
Abstract: Nowadays it is known that the thermomechanical schedules applied during hot rolling of flat products provide the steel with improved mechanical properties. In this work an optimisation tool, OptiLam (OptiLam v.1), based on predictive software and capable of generating optimised rolling schedules to obtain the desired mechanical properties in the final product, is described. OptiLam includes some well-known metallurgical models which predict microstructural evolution during hot rolling and the austenite/ferrite transformation during cooling. Furthermore, an optimisation algorithm based on the gradient method has been added in order to design thermomechanical sequences when a specific final grain size is desired. OptiLam has been used to optimise rolling parameters such as strain and temperature. Here, some results of the software validation performed by means of hot torsion tests are presented, which also show the functionality of the tool. Finally, the application of classical optimisation models based on the gradient method to hot rolling operations is also discussed.
Funding: The authors would like to acknowledge the financial support provided by the National Key R&D Program of China (Grant No. 2018YFC1504802) and the National Natural Science Foundation of China (Grant Nos. 41972266 and 12102230).
Abstract: A method for packing irregular particles with a prescribed volume fraction is proposed, such that the generated granular material adheres to a prescribed statistical distribution and satisfies a desired complex spatial arrangement. First, the irregular geometries of realistic particles were obtained from the original particle images. Second, the Minkowski sum was used to check the overlap between irregular particles and to place an irregular particle in contact with other particles. Third, the optimised advance front method (OAFM) generated irregular particle packings with the prescribed statistical distribution and volume fraction based on the Minkowski sum. Moreover, the signed distance function was introduced to pack the particles in accordance with the desired spatial arrangement. Finally, seven biaxial tests were performed using the UDEC software, which demonstrated the accuracy and potential usefulness of the proposed method. It can model granular material efficiently and reflect the meso-structural characteristics of complex granular materials. This method has a wide range of applications where discrete modelling of granular media is necessary.
基金supported in part by the National Key Research and Development Program of China under Grant 2019YFB2102102in part by the National Natural Science Foundations of China under Grant 62176094 and Grant 61873097+2 种基金in part by the Key‐Area Research and Development of Guangdong Province under Grant 2020B010166002in part by the Guangdong Natural Science Foundation Research Team under Grant 2018B030312003in part by the Guangdong‐Hong Kong Joint Innovation Platform under Grant 2018B050502006.
Abstract: Research into automatically searching for an optimal neural network (NN) by optimisation algorithms is a significant research topic in deep learning and artificial intelligence. However, this is still challenging due to two issues: both the hyperparameters and the architecture should be optimised, and the optimisation process is computationally expensive. To tackle these two issues, this paper focuses on solving the hyperparameter and architecture optimisation problem for the NN and proposes a novel lightweight scale-adaptive fitness evaluation-based particle swarm optimisation (SAFE-PSO) approach. Firstly, the SAFE-PSO algorithm considers the hyperparameters and architectures together in the optimisation problem and can therefore find their optimal combination for the globally best NN. Secondly, the computational cost can be reduced by using multi-scale accuracy evaluation methods to evaluate candidates. Thirdly, a stagnation-based switch strategy is proposed to adaptively switch between evaluation methods to better balance search performance and computational cost. The SAFE-PSO algorithm is tested on two widely used datasets: the 10-category (i.e., CIFAR10) and the 100-category (i.e., CIFAR100). The experimental results show that SAFE-PSO is very effective and efficient: it can not only find a promising NN automatically but also find a better NN than the compared algorithms at the same computational cost.
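For context, the following is a vanilla particle swarm optimiser with inertia weight, the baseline that SAFE-PSO builds on; in the paper's setting, each particle would encode a hyperparameter/architecture combination and `f` would be the (multi-scale) accuracy evaluation, whereas here it is simply a test function:

```python
import numpy as np

def pso(f, lo, hi, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard PSO (minimisation): velocities mix inertia, a pull toward each
    particle's personal best, and a pull toward the global best."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                 # keep particles inside bounds
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

SAFE-PSO's contribution sits on top of this loop: candidates are scored with cheaper low-scale evaluations most of the time, switching scales when the search stagnates.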
Funding: Partially supported by the Medical Research Council Confidence in Concept Award, UK (MC_PC_17171); the Royal Society International Exchanges Cost Share Award, UK (RP202G0230); the British Heart Foundation Accelerator Award, UK (AA\18\3\34220); the Hope Foundation for Cancer Research, UK (RM60G0680); the Global Challenges Research Fund (GCRF), UK (P202PF11); the Sino-UK Industrial Fund, UK (RP202G0289); the LIAS Pioneering Partnerships Award, UK (P202ED10); the Data Science Enhancement Fund, UK (P202RE237); and the Guangxi Key Laboratory of Trusted Software, CN (kx201901).
Abstract: Since 2019, the coronavirus disease 2019 (COVID-19) has been spreading rapidly worldwide, posing an unignorable threat to the global economy and human health. It is a disease caused by severe acute respiratory syndrome coronavirus 2, a single-stranded RNA virus of the genus Betacoronavirus. This virus is highly infectious and relies on its angiotensin-converting enzyme 2 receptor to enter cells. With the increase in the number of confirmed COVID-19 diagnoses, the difficulty of diagnosis due to the lack of global healthcare resources becomes increasingly apparent. Deep learning-based computer-aided diagnosis models with high generalisability can effectively alleviate this pressure. Hyperparameter tuning is essential in training such models and significantly impacts their final performance and training speed. However, traditional hyperparameter tuning methods are usually time-consuming and unstable. To solve this issue, we introduce Particle Swarm Optimisation to build a PSO-guided Self-Tuning Convolution Neural Network (PSTCNN), allowing the model to tune hyperparameters automatically and thereby reducing human involvement. Moreover, the optimisation algorithm can select the combination of hyperparameters in a targeted manner, stably achieving a solution closer to the global optimum. Experimentally, the PSTCNN obtains excellent results, with a sensitivity of 93.65% ± 1.86%, a specificity of 94.32% ± 2.07%, a precision of 94.30% ± 2.04%, an accuracy of 93.99% ± 1.78%, an F1-score of 93.97% ± 1.78%, a Matthews Correlation Coefficient of 87.99% ± 3.56%, and a Fowlkes-Mallows Index of 93.97% ± 1.78%. Our experiments demonstrate that, compared to traditional methods, hyperparameter tuning of the model using an optimisation algorithm is faster and more effective.
Funding: The authors acknowledge funding for this work from NSF-CMMI 2009270 and EPSRC EP/V034391/1.
Abstract: The paper studies the stochastic dynamics of a two-degree-of-freedom system, where a primary linear system is connected to a nonlinear energy sink with cubic stiffness nonlinearity and viscous damping. With the primary mass subjected to a zero-mean Gaussian white noise excitation, the main objective of this study is to maximise the efficiency of the targeted energy transfer in the system. A surrogate optimisation algorithm is proposed for this purpose and adopted for the stochastic framework. The optimisations are conducted separately for the nonlinear stiffness coefficient alone as well as for both the nonlinear stiffness and damping coefficients together. Three different optimisation cost functions, based on either the energy of the system's components or the dissipated energy, are considered. The results demonstrate clear trends in the values of the nonlinear energy sink coefficients and show the effect of the different cost functions on the optimal values of the nonlinear system's coefficients.
Funding: National Natural Science Foundation of China, Grant/Award Number: 51677059.
Abstract: Introducing carbon trading into the electricity market can convert carbon dioxide into a schedulable resource with economic value. However, the randomness of wind power generation places higher requirements on electricity market transactions. Therefore, the carbon trading market is introduced into the wind power market, and a new form of low-carbon economic dispatch model is developed. First, the economic dispatch goal of wind power is considered: the model aims to save money and reduce the system's power generation cost. It includes risk operating costs to account for the impact of wind power output variability on the system, as well as wind farm negative-efficiency operating costs to account for the loss caused by wind abandonment. The model also employs carbon trading market metrics to achieve the goal of lowering system carbon emissions, and the impact of different carbon trading prices on the system is analysed. A low-carbon economic dispatch model for the wind power market is implemented based on these two goals. Finally, the solution is optimised using an Ant-lion optimisation method that combines a Lévy flight mechanism and the golden sine strategy. The rationality of the proposed model and algorithm is demonstrated through case studies.
Funding: Supported by the Federal Ministry for Economic Affairs and Climate Action of Germany (BMWK) within the Innovation Platform "KEEN - Artificial Intelligence Incubator Laboratory in the Process Industry" (Grant No. 01MK20014T). The research of L.B. is supported by the Swedish Research Council, Grant VR 2018-03661.
Abstract: Change point detection is becoming increasingly important because it can support data analysis by providing labels to the data in an unsupervised manner. In the context of process data analytics, change points in the time series of process variables may carry important indications about the process operation. For example, in a batch process, the change points can correspond to the operations and phases defined by the batch recipe; hence identifying change points can assist in labelling the time series data. Various unsupervised algorithms have been developed for change point detection, including the optimisation approach, which minimises a cost function with certain penalties to search for the change points, and the Bayesian approach, which uses Bayesian statistics to calculate the posterior probability of a specific sample being a change point. The paper investigates how the two approaches can be applied to process data analytics. In addition, a new type of cost function using Tikhonov regularisation is proposed for the optimisation approach to reduce irrelevant change points caused by randomness in the data. The novelty lies in using regularisation-based cost functions to handle ill-posed problems of noisy data. The results demonstrate that change point detection is useful for process data analytics because change points can produce data segments corresponding to different operating modes or varying conditions, which will be useful for other machine learning tasks.
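The optimisation approach to change point detection can be illustrated with a small dynamic program that minimises the per-segment squared error plus a fixed penalty per change point (plain penalised least squares, without the Tikhonov-regularised cost proposed in the paper; the penalty value is an illustrative choice):

```python
import numpy as np

def detect_change_points(y, penalty=5.0):
    """Optimal partitioning: minimise sum of per-segment squared residuals
    around segment means plus `penalty` per segment, via O(n^2) DP."""
    n = len(y)
    csum = np.insert(np.cumsum(y), 0, 0.0)
    csum2 = np.insert(np.cumsum(y ** 2), 0, 0.0)

    def seg_cost(i, j):              # SSE of y[i:j] around its own mean
        s, s2, m = csum[j] - csum[i], csum2[j] - csum2[i], j - i
        return s2 - s * s / m

    F = np.full(n + 1, np.inf)
    F[0] = -penalty                  # so k segments pay (k-1) change-point penalties
    last = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        costs = [F[i] + seg_cost(i, j) + penalty for i in range(j)]
        i_best = int(np.argmin(costs))
        F[j], last[j] = costs[i_best], i_best
    cps, j = [], n                   # backtrack the segment boundaries
    while j > 0:
        j = last[j]
        if j > 0:
            cps.append(j)
    return sorted(cps)
```

A mean shift in a process variable then shows up as a recovered boundary, and spurious boundaries in noisy but stationary stretches are suppressed by the penalty, which is exactly the trade-off the regularised cost in the paper refines.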
Abstract: Settlement prediction of geosynthetic-reinforced soil (GRS) abutments under service loading conditions is an arduous and challenging task for practising geotechnical/civil engineers. Hence, in this paper, a novel hybrid artificial intelligence (AI)-based model was developed by combining an artificial neural network (ANN) with Harris hawks optimisation (HHO), that is, ANN-HHO, to predict the settlement of GRS abutments. Five other robust intelligent models, namely support vector regression (SVR), Gaussian process regression (GPR), relevance vector machine (RVM), sequential minimal optimisation regression (SMOR), and least-median square regression (LMSR), were constructed and compared to the ANN-HHO model. The predictive strength, reliability and robustness of the model were evaluated based on rigorous statistical testing, ranking criteria, a multi-criteria approach, uncertainty analysis and sensitivity analysis (SA). Moreover, the predictive veracity of the model was substantiated against several large-scale independent experimental studies on GRS abutments reported in the scientific literature. The acquired findings demonstrated that the ANN-HHO model predicted the settlement of GRS abutments with reasonable accuracy and yielded superior performance in comparison to the counterpart models. Therefore, it can become one of the predictive tools employed by geotechnical/civil engineers in preliminary decision-making when investigating the in-service performance of GRS abutments. Finally, the model has been converted into a simple mathematical formulation for easy hand calculations, and it proves cost-effective and less time-consuming in comparison to experimental tests and numerical simulations.