We evaluate an adaptive optimisation methodology, Bayesian optimisation (BO), for designing a minimum weight explosive reactive armour (ERA) for protection against a surrogate medium calibre kinetic energy (KE) long rod projectile and a surrogate shaped charge (SC) warhead. We perform the optimisation using a conventional BO methodology and compare it with a conventional trial-and-error approach from a human expert. A third approach, utilising a novel human-machine teaming framework for BO, is also evaluated. Data for the optimisation is generated using numerical simulations that are demonstrated to provide reasonable qualitative agreement with reference experiments. The human-machine teaming methodology is shown to identify the optimum ERA design in the fewest evaluations, outperforming both the stand-alone human and stand-alone BO methodologies. From a design space of almost 1800 configurations, the human-machine teaming approach identifies the minimum weight ERA design in 10 samples.
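As a rough illustration of the Bayesian optimisation loop described in the abstract above (not the authors' ERA simulation setup), the sketch below fits a Gaussian process to evaluated designs and picks the next candidate from a discrete design space by expected improvement. The toy objective, the two design variables and the candidate grid are all hypothetical placeholders.

```python
# Minimal Bayesian-optimisation sketch over a discrete design space.
# Each call to the objective stands in for an expensive simulation.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def toy_objective(x):
    """Hypothetical 'areal weight' penalised when protection degrades."""
    plate, angle = x
    return plate + 0.02 * (angle - 35.0) ** 2 + rng.normal(0.0, 0.05)

# Discrete candidate designs (plate thickness in mm, obliquity in degrees).
plates = np.linspace(1.0, 10.0, 40)
angles = np.linspace(10.0, 70.0, 45)
candidates = np.array([(p, a) for p in plates for a in angles])  # 1800 designs

# A few initial random evaluations to seed the surrogate.
idx = rng.choice(len(candidates), size=5, replace=False)
X = candidates[idx]
y = np.array([toy_objective(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for it in range(10):
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.min()
    # Expected improvement for minimisation.
    imp = best - mu
    z = imp / np.maximum(sigma, 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, toy_objective(x_next))

print("best design found:", X[np.argmin(y)], "objective:", y.min())
```

In the paper's human-machine teaming variant, the human expert would influence which candidate is actually evaluated at each iteration; the loop structure itself stays the same.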
Decomposition of a complex multi-objective optimisation problem (MOP) into multiple simple subMOPs, known as M2M for short, is an effective approach to multi-objective optimisation. However, M2M facilitates little communication/collaboration between subMOPs, which limits its use in complex optimisation scenarios. This paper extends the M2M framework to develop a unified algorithm for both multi-objective and many-objective optimisation. Through bilevel decomposition, an MOP is divided into multiple subMOPs at the upper level, each of which is further divided into a number of single-objective subproblems at the lower level. Neighbouring subMOPs are allowed to share some subproblems so that the knowledge gained from solving one subMOP can be transferred to another, and eventually to all the subMOPs. The bilevel decomposition is readily combined with some new mating selection and population update strategies, leading to a high-performance algorithm that competes effectively against a number of state-of-the-art algorithms studied in this paper for both multi- and many-objective optimisation. Parameter analysis and component analysis have also been carried out to further justify the proposed algorithm.
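The bilevel decomposition can be pictured with a small sketch: weight vectors define lower-level scalar subproblems (here via Tchebycheff scalarisation), and these are grouped into overlapping subMOPs at the upper level, with vectors near a boundary shared by two subMOPs. The grouping rule, the angle margin and the two-objective setting below are illustrative choices, not the exact operators of the proposed algorithm.

```python
# Sketch of bilevel decomposition: weight vectors -> lower-level subproblems,
# grouped into K upper-level subMOPs that share boundary subproblems.
import numpy as np

def uniform_weights(h):
    """Uniform weight vectors for two objectives."""
    w1 = np.linspace(0.0, 1.0, h + 1)
    return np.column_stack([w1, 1.0 - w1])

def tchebycheff(f, w, z_star):
    """Lower-level scalar subproblem value for an objective vector f."""
    return np.max(w * np.abs(f - z_star))

def angle(a, b):
    return np.arccos(np.clip(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0))

weights = uniform_weights(99)          # 100 single-objective subproblems
K = 4                                  # number of upper-level subMOPs
centres = uniform_weights(K - 1)       # one central direction per subMOP

# Assign each weight vector to its nearest subMOP centre (by angle).
owner = np.array([np.argmin([angle(w, c) for c in centres]) for w in weights])
members = {k: set(np.where(owner == k)[0]) for k in range(K)}

# Share vectors that lie almost as close to a second centre (overlap regions).
share_margin = 0.05
for i, w in enumerate(weights):
    d = sorted((angle(w, c), k) for k, c in enumerate(centres))
    if d[1][0] - d[0][0] < share_margin:
        members[d[1][1]].add(i)          # subproblem i belongs to two subMOPs

for k in range(K):
    shared = len(members[k]) - int(np.sum(owner == k))
    print(f"subMOP {k}: {len(members[k])} subproblems ({shared} shared)")
```

The shared subproblems are what carry knowledge between neighbouring subMOPs when each subMOP is optimised.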
To allow grid-connected decentralised wind power to play a positive role in improving voltage stability and reducing losses in the distribution network, a multi-objective two-stage decentralised wind power planning method is proposed in this paper, which takes into account the network loss correction for extremely cold regions. Firstly, an electro-thermal model is introduced to reflect the effect of temperature on conductor resistance and to correct the results of the active network loss calculation. Secondly, a two-stage multi-objective model for decentralised wind power siting and capacity allocation and reactive voltage optimisation control is constructed that accounts for the network loss correction: the first-stage multi-objective planning model considers the whole-life-cycle investment cost of the wind turbine generators (WTGs), the system operating cost and the voltage quality of the power supply, while the second stage further develops the reactive voltage control strategy of the WTGs on this basis, yielding a distribution network loss reduction method based on WTG siting and capacity allocation and reactive power control. Finally, the optimal configuration scheme is solved by the manta ray foraging optimisation (MRFO) algorithm, and the loss of each branch line and bus of the distribution network before and after adopting this loss reduction method is calculated for the IEEE 33-bus distribution system as an example, which verifies the practicability and validity of the proposed method and provides a reference for decision-making in distributed energy planning for distribution networks.
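The electro-thermal correction itself is simple to state: conductor resistance varies roughly linearly with temperature, which changes the computed I²R losses used in the planning model. The snippet below shows that correction for a single branch; the temperature coefficient and line data are placeholder values, not the paper's.

```python
# Branch loss with and without the electro-thermal resistance correction.
# R(T) = R_ref * (1 + alpha * (T - T_ref)); per-phase loss = I^2 * R.
R_ref = 0.42      # ohm, branch resistance at the reference temperature
T_ref = 20.0      # deg C, reference temperature
alpha = 0.004     # 1/deg C, placeholder temperature coefficient (aluminium-like)
I = 85.0          # A, branch current from the power-flow solution

for T in (-30.0, 20.0, 60.0):            # extreme-cold to warm conductor temperatures
    R_T = R_ref * (1.0 + alpha * (T - T_ref))
    loss_kw = 3.0 * I**2 * R_T / 1e3     # three-phase branch loss in kW
    print(f"T = {T:6.1f} degC  R = {R_T:.3f} ohm  loss = {loss_kw:.2f} kW")
```

In extreme cold the corrected resistance is noticeably lower than the nameplate value, which is why the paper corrects losses before comparing planning alternatives.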
In recent years, there has been remarkable progress in the performance of metal halide perovskite solar cells. Studies have shown significant interest in lead-free perovskite solar cells (PSCs) due to concerns about the toxicity of lead in lead halide perovskites. CH3NH3SnI3 emerges as a viable alternative to CH3NH3PbX3. In this work, we studied the effect of various parameters on the performance of lead-free perovskite solar cells using simulation with the SCAPS-1D software. The cell structure consists of α-Fe2O3/CH3NH3SnI3/PEDOT:PSS. We analyzed parameters such as layer thickness, doping concentration, and defect density. The study revealed that, without considering other optimized parameters, the efficiency of the cell increased from 22% to 35% when the perovskite thickness varied from 100 to 1000 nm. After optimization, the solar cell efficiency reaches up to 42%. The optimized parameters are, for example, for the perovskite: a layer thickness of 700 nm, a doping concentration of 10^20 cm^-3 and a defect density of 10^13 cm^-3; and for hematite: a thickness of 5 nm, a doping concentration of 10^22 cm^-3 and a defect concentration of 10^11 cm^-3. These results are encouraging because they highlight the good agreement between perovskite and hematite when used as the active and electron transport layers, respectively. It now remains to produce real, viable photovoltaic solar cells with the proposed material layer parameters.
Over the last decade, the rapid growth in traffic and the number of network devices has implicitly led to an increase in network energy consumption. In this context, a new paradigm has emerged, Software-Defined Networking (SDN), which is an emerging technique that separates the control plane and the data plane of the deployed network, enabling centralized control of the network, while offering flexibility in data center network management. Some research work is moving in the direction of optimizing the energy consumption of SD-DCN, but still does not guarantee good performance and quality of service for SDN networks. To solve this problem, we propose a new mathematical model based on the principle of combinatorial optimization to dynamically solve the problem of activating and deactivating switches and unused links that consume energy in SDN networks while guaranteeing quality of service (QoS) and ensuring load balancing in the network.
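A toy version of this switch/link activation problem can be written as a small integer programme: binary variables pick one precomputed candidate path per demand, a link or switch is only powered if some chosen path uses it, and link capacities bound the carried load. The three-switch topology, power figures and the use of PuLP/CBC below are illustrative assumptions, not the authors' formulation.

```python
# Toy SDN energy model: choose one candidate path per demand, minimise the power
# of the links and switches that end up carrying traffic, respect capacities.
import pulp

links = {("s1", "s2"): 20, ("s2", "s3"): 20, ("s1", "s3"): 20}    # capacity (Gbps)
link_power = {l: 5.0 for l in links}                               # W per active link
switch_power = {"s1": 50.0, "s2": 50.0, "s3": 50.0}                # W per active switch

# Each demand: (rate in Gbps, candidate paths given as lists of links).
demands = {
    "d1": (6, [[("s1", "s2"), ("s2", "s3")], [("s1", "s3")]]),
    "d2": (6, [[("s1", "s3")], [("s1", "s2"), ("s2", "s3")]]),
}

prob = pulp.LpProblem("sdn_energy", pulp.LpMinimize)
x = {(d, i): pulp.LpVariable(f"x_{d}_{i}", cat="Binary")
     for d, (_, paths) in demands.items() for i in range(len(paths))}
y_link = {l: pulp.LpVariable(f"link_{l[0]}_{l[1]}", cat="Binary") for l in links}
y_sw = {s: pulp.LpVariable(f"sw_{s}", cat="Binary") for s in switch_power}

# Objective: total power of active links and switches.
prob += (pulp.lpSum(link_power[l] * y_link[l] for l in links)
         + pulp.lpSum(switch_power[s] * y_sw[s] for s in switch_power))

for d, (_, paths) in demands.items():
    prob += pulp.lpSum(x[d, i] for i in range(len(paths))) == 1    # one path per demand

for l in links:
    load = pulp.lpSum(rate * x[d, i]
                      for d, (rate, paths) in demands.items()
                      for i, p in enumerate(paths) if l in p)
    prob += load <= links[l] * y_link[l]                           # capacity + activation
    prob += y_link[l] <= y_sw[l[0]]                                # a link needs both
    prob += y_link[l] <= y_sw[l[1]]                                # of its switches on

prob.solve(pulp.PULP_CBC_CMD(msg=False))
active = [l for l in links if y_link[l].value() > 0.5]
print("status:", pulp.LpStatus[prob.status], "| active links:", active,
      "| total power (W):", pulp.value(prob.objective))
```

With these numbers both demands fit on the direct s1-s3 link, so switch s2 and two links can be powered down; QoS and load-balancing constraints from the paper would add further inequalities of the same kind.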
In real-world applications, datasets frequently contain outliers, which can hinder the generalization ability of machine learning models. Bayesian classifiers, a popular supervised learning method, rely on accurate probability density estimation for classifying continuous datasets. However, achieving precise density estimation with datasets containing outliers poses a significant challenge. This paper introduces a Bayesian classifier that utilizes optimized robust kernel density estimation to address this issue. Our proposed method enhances the accuracy of probability density distribution estimation by mitigating the impact of outliers on the training sample's estimated distribution. Unlike the conventional kernel density estimator, our robust estimator can be seen as a weighted kernel mapping summary for each sample. This kernel mapping performs the inner product in the Hilbert space, allowing the kernel density estimation to be considered the average of the samples' mapping in the Hilbert space using a reproducing kernel. M-estimation techniques are used to obtain accurate mean values and solve the weights. Meanwhile, complete cross-validation is used as the objective function to search for the optimal bandwidth, which impacts the estimator. The Harris Hawks Optimisation optimizes the objective function to improve the estimation accuracy. The experimental results show that it outperforms other optimization algorithms regarding convergence speed and objective function value during the bandwidth search. The optimal robust kernel density estimator achieves better fitness performance than the traditional kernel density estimator when the training data contains outliers. The Naïve Bayesian with optimal robust kernel density estimation improves the generalization in the classification with outliers.
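A stripped-down version of a robust, weighted kernel density estimator can be sketched as follows: sample weights are obtained by M-estimation of the mean in the kernel feature space (iteratively down-weighting points that sit far from it), and those weights then scale the usual KDE sum. The Gaussian kernel, the Huber-style threshold and the fixed bandwidth are simplifications; the paper additionally tunes the bandwidth with complete cross-validation and Harris Hawks Optimisation.

```python
# Weighted (robust) kernel density estimation: M-estimation of the feature-space
# mean gives per-sample weights that damp outliers; the weights are then reused
# in the density estimate itself.
import numpy as np

def gauss_kernel(a, b, h):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * h * h))

def robust_weights(X, h, c=0.9, iters=20):
    n = len(X)
    w = np.full(n, 1.0 / n)
    K = gauss_kernel(X, X, h)
    for _ in range(iters):
        # Squared distance of each phi(x_i) to the weighted mean in feature space.
        d2 = np.diag(K) - 2.0 * K @ w + w @ K @ w
        r = np.sqrt(np.maximum(d2, 1e-12))
        psi = np.minimum(1.0, c / r)          # Huber-type reweighting
        w = psi / psi.sum()
    return w

def weighted_kde(x_eval, X, w, h):
    K = gauss_kernel(x_eval, X, h)
    norm = (2.0 * np.pi * h * h) ** (X.shape[1] / 2.0)
    return (K @ w) / norm

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0, 1, size=(200, 1)),
                    rng.normal(8, 0.5, size=(10, 1))])   # 10 outliers near x = 8
w = robust_weights(X, h=0.4)
grid = np.linspace(-4, 10, 300)[:, None]
dens = weighted_kde(grid, X, w, h=0.4)

print("weight on the 10 outliers: %.3f (uniform would be %.3f)" % (w[-10:].sum(), 10 / len(X)))
print("estimated density near the outlier cluster (x=8): %.4f"
      % dens[np.argmin(np.abs(grid[:, 0] - 8.0))])
```

The resulting weighted density can then be plugged into a Naive Bayes classifier in place of the standard per-class KDE.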
Digital Twins and Applications (ISSN 2995-2182) was initiated by Youxian Sun of Zhejiang University, an academician of the Chinese Academy of Engineering (CAE). It is published by Zhejiang University Press and the Institution of Engineering and Technology and sponsored by Zhejiang University. Digital Twins and Applications aims to provide a specialised platform for researchers, practitioners, and industry experts to publish high-quality, state-of-the-art research on digital twin technologies and their applications.
Hardware-based sensing frameworks such as cooperative fuel research engines are conventionally used to monitor research octane number (RON) in the petroleum refining industry. Machine learning techniques are employed to predict the RON of integrated naphtha reforming and isomerisation processes. A dynamic Aspen HYSYS model was used to generate data by introducing artificial uncertainties in the range of ±5% in process conditions, such as temperature, flow rates, etc. The generated data was used to train support vector machines (SVM), Gaussian process regression (GPR), artificial neural networks (ANN), regression trees (RT), and ensemble trees (ET). Hyperparameter tuning was performed to enhance the prediction capabilities of the GPR, ANN, SVM, ET and RT models. Performance analysis of the models indicates that GPR, ANN, and SVM, with R^2 values of 0.99, 0.978, and 0.979 and RMSE values of 0.108, 0.262, and 0.258, respectively, performed better than the remaining models and had the prediction capability to capture the RON dependence on the predictor variables. ET and RT had R^2 values of 0.94 and 0.89, respectively. The GPR model was used as a surrogate model for fitness function evaluations in two optimisation frameworks based on the genetic algorithm and the particle swarm method. Optimal parameter values found by the optimisation methodology increased the RON value by 3.52%. The proposed methodology of surrogate-based optimisation will provide a platform for plant-level implementation to realise the concept of Industry 4.0 in the refinery.
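The surrogate-based optimisation step can be illustrated compactly: a Gaussian process regressor is fitted to (operating condition, RON) data and then queried as a cheap fitness function inside a particle swarm loop. The synthetic "plant" function, variable bounds and swarm settings below are placeholders rather than the refinery model's.

```python
# GPR surrogate of RON fitted on synthetic process data, then maximised with a
# minimal particle swarm optimiser that only ever queries the surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

def plant_ron(x):
    """Stand-in for the Aspen HYSYS model: RON vs temperature and feed rate."""
    t, f = x[..., 0], x[..., 1]
    return 90.0 + 5.0 * np.exp(-((t - 510.0) / 25.0) ** 2) - 0.002 * (f - 100.0) ** 2

bounds = np.array([[480.0, 540.0], [80.0, 120.0]])      # temperature (C), feed (t/h)
X_train = rng.uniform(bounds[:, 0], bounds[:, 1], size=(80, 2))
y_train = plant_ron(X_train) + rng.normal(0.0, 0.05, 80)

gpr = GaussianProcessRegressor(RBF(length_scale=[20.0, 10.0]) + WhiteKernel(),
                               normalize_y=True).fit(X_train, y_train)

# Minimal PSO maximising the surrogate prediction.
n_particles, iters, w, c1, c2 = 30, 60, 0.7, 1.5, 1.5
pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), gpr.predict(pos)
gbest = pbest[np.argmax(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    val = gpr.predict(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)]

print("surrogate-optimal conditions:", gbest, "predicted RON:", gpr.predict([gbest])[0])
```

In the paper the fitness evaluations come from the trained GPR in exactly this role, with GA and PSO as the two outer optimisers.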
An excellent cardinality estimation can make the query optimiser produce a good execution plan. Although there are some studies on cardinality estimation, the prediction results of existing cardinality estimators are inaccurate and query efficiency cannot be guaranteed either. In particular, it is difficult for them to accurately capture the complex relationships between multiple tables in complex database systems. When dealing with complex queries, the existing cardinality estimators cannot achieve good results. In this study, a novel cardinality estimator is proposed. It uses a BiLSTM network structure as its core technique and adds an attention mechanism. First, the columns involved in the query statements in the training set are sampled and compressed into bitmaps. Then, the Word2vec model is used to embed the query statements as word vectors. Finally, the BiLSTM network and attention mechanism are employed to process the word vectors. The proposed model takes into consideration not only the correlation between tables but also the processing of complex predicates. Extensive experiments and an evaluation of the BiLSTM-Attention Cardinality Estimator (BACE) on the IMDB datasets are conducted. The results show that the deep learning model can significantly improve the quality of cardinality estimation, which plays a vital role in query optimisation for complex databases.
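A minimal PyTorch sketch of a BiLSTM-with-attention regressor is shown below; it uses a learnable embedding in place of pretrained Word2vec vectors and omits the bitmap sampling features, so it only shows how attention pools the BiLSTM states into a single cardinality prediction.

```python
# BiLSTM + attention pooling for (log-)cardinality regression from tokenised SQL.
import torch
import torch.nn as nn

class BiLSTMAttnEstimator(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)          # scores one weight per token
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, tokens):                        # tokens: (batch, seq_len) ids
        h, _ = self.lstm(self.embed(tokens))          # (batch, seq_len, 2*hidden)
        alpha = torch.softmax(self.attn(h), dim=1)    # attention over the sequence
        context = (alpha * h).sum(dim=1)              # weighted sum of hidden states
        return self.out(context).squeeze(-1)          # predicted log-cardinality

model = BiLSTMAttnEstimator(vocab_size=5000)
fake_queries = torch.randint(0, 5000, (8, 40))        # batch of 8 tokenised queries
loss = nn.functional.mse_loss(model(fake_queries), torch.randn(8))
loss.backward()                                       # trains end-to-end with MSE
print("loss:", float(loss))
```

Predicting the logarithm of the cardinality (rather than the raw count) is the usual choice for this kind of regressor, since true cardinalities span many orders of magnitude.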
Expansive soils are problematic due to the behaviour of their clay mineral constituents, which makes them exhibit shrink-swell characteristics. This shrink-swell behaviour makes expansive soils inappropriate for direct engineering application in their natural form. In an attempt to make them more feasible for construction purposes, numerous materials and techniques have been used to stabilise the soil. In this study, the additives and techniques applied for stabilising expansive soils are reviewed with respect to their efficiency in improving the engineering properties of the soils. We then discuss the microstructural interaction, chemical processes, economic implications, nanotechnology applications, as well as waste reuse and sustainability. Some issues regarding the effective application of the emerging trends in expansive soil stabilisation are presented in three categories, namely geoenvironmental, standardisation and optimisation issues. Techniques like predictive modelling and methods such as reliability-based design optimisation, response surface methodology, dimensional analysis, and artificial intelligence technology are also proposed in order to ensure that expansive soil stabilisation is efficient.
Self-piercing riveting (SPR) is a cold forming technique used to fasten together two or more sheets of material with a rivet without the need to predrill a hole. The application of SPR in the automotive sector has become increasingly popular, mainly due to the growing use of lightweight materials in transportation applications. However, SPR joining of these advanced light materials remains a challenge, as these materials often lack a good combination of high strength and ductility to resist the large plastic deformation induced by the SPR process. In this paper, SPR joints of advanced materials and their corresponding failure mechanisms are discussed, aiming to provide the foundation for future improvement of SPR joint quality. The paper is divided into three major sections: 1) joint failures, focusing on joint defects originating from the SPR process and joint failure modes under different mechanical loading conditions; 2) joint corrosion issues; and 3) joint optimisation via process parameters and advanced techniques.
In the present study, we developed a multi-component one-dimensional mathematical model for simulation and optimisation of a commercial catalytic slurry reactor for the direct synthesis of dimethyl ether (DME) from syngas and CO2, operating in a churn-turbulent regime. DME productivity and CO conversion were optimised by tuning operating conditions, such as superficial gas velocity, catalyst concentration, catalyst mass over molar gas flow rate (W/F), syngas composition, pressure and temperature. Reactor modelling was accomplished utilising mass balance, global kinetic models and heterogeneous hydrodynamics. In the heterogeneous flow regime, gas was distributed into two bubble phases: small and large. Simulation results were validated using data obtained from a pilot plant. The developed model is also applicable for the design of large-scale slurry reactors.
This paper presents the effect of mooring diameters, fairlead slopes and pretensions on the dynamic responses of a truss spar platform in intact and damaged line conditions. The platform is modelled as a rigid body with three degrees of freedom and its motions are analysed in the time domain using the implicit Newmark beta technique. The mooring restoring force-excursion relationship is evaluated using a quasi-static approach. MATLAB codes, DATSpar and QSAML, are developed to compute the dynamic responses of the truss spar platform and to determine the mooring system stiffness. To eliminate the conventional trial-and-error approach in mooring system design, a numerical tool is also developed and described in this paper for optimising the mooring configuration. It has a graphical user interface and includes a regrouping particle swarm optimisation technique combined with DATSpar and QSAML. A case study of a truss spar platform with ten mooring lines is analysed using this numerical tool. The results show that optimum mooring system design helps the oil and gas industry economise project cost in terms of material, weight, structural load onto the platform as well as manpower requirements. This tool is useful especially for the preliminary design of truss spar platforms and their mooring systems.
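For the time-domain part, an implicit Newmark beta step for M·a + C·v + K·x = F(t) is standard and easy to sketch. The 3x3 matrices and the wave-like load below are placeholders, not the DATSpar platform model, and the mooring stiffness is taken as a constant linearised K rather than the quasi-static force-excursion curve.

```python
# Implicit Newmark-beta (average acceleration) time integration of
# M a + C v + K x = F(t) for a small multi-degree-of-freedom system.
import numpy as np

def newmark(M, C, K, F, x0, v0, dt, beta=0.25, gamma=0.5):
    n_steps, n_dof = F.shape
    x, v = np.array(x0, float), np.array(v0, float)
    a = np.linalg.solve(M, F[0] - C @ v - K @ x)
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt ** 2)
    hist = np.empty((n_steps, n_dof))
    hist[0] = x
    for i in range(1, n_steps):
        rhs = (F[i]
               + M @ (x / (beta * dt ** 2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
               + C @ (gamma / (beta * dt) * x + (gamma / beta - 1.0) * v
                      + dt * (0.5 * gamma / beta - 1.0) * a))
        x_new = np.linalg.solve(K_eff, rhs)
        a_new = (x_new - x) / (beta * dt ** 2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        x, a = x_new, a_new
        hist[i] = x
    return hist

# Placeholder 3-DOF (surge, heave, pitch) system with linearised mooring stiffness.
M = np.diag([5.0e7, 5.0e7, 8.0e9])
C = np.diag([1.0e6, 1.2e6, 5.0e8])
K = np.diag([2.0e5, 4.0e6, 6.0e9])
t = np.arange(0.0, 600.0, 0.1)
F = np.column_stack([1.0e6 * np.sin(2 * np.pi * t / 12.0),   # wave-like surge force
                     np.zeros_like(t), np.zeros_like(t)])
resp = newmark(M, C, K, F, np.zeros(3), np.zeros(3), dt=0.1)
print("maximum surge excursion (m): %.4f" % resp[:, 0].max())
```

In the paper's tool the restoring term would be updated from the quasi-static mooring curve at each excursion instead of the constant K used here.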
A general and new explicit isogeometric topology optimisation approach with moving morphable voids (MMV) is proposed. In this approach, a novel multiresolution scheme with two distinct discretisation levels is developed to obtain high-resolution designs at a relatively low computational cost. An ersatz material model based on the Greville abscissae collocation scheme is utilised to represent both the Young's modulus of the material and the density field. Two benchmark examples are tested to illustrate the effectiveness of the proposed method. Numerical results show that high-resolution designs can be obtained with relatively low computational cost, and that the optimisation can be significantly improved without introducing additional DOFs.
Development of appropriate tourism infrastructure is important for protected areas that allow public access for tourism use. It is meant to avoid or minimize unfavourable impacts on natural resources by guiding tourists towards proper use. In this paper, a GIS-based method, least-cost path (LCP) modelling, is explored for planning tourist tracks in a World Heritage site in Northwest Yunnan (China), where tourism is increasing rapidly while appropriate infrastructure is almost absent. The modelling process contains three steps: 1) selection of evaluation criteria (physical, biological and landscape scenic) that are relevant to the track decision; 2) translation of the evaluation criteria into spatially explicit cost surfaces with GIS; and 3) use of Dijkstra's algorithm to determine the least-cost tracks. Four tracks that link the main entrances and scenic spots of the study area are proposed after optimizing all evaluation criteria. These tracks feature low environmental impacts and high landscape qualities, which represent a reasonable solution to balance tourist use and nature conservation in the study area. In addition, the study proves that LCP modelling can not only offer a structured framework for track planning but also allow different stakeholders to participate in the planning process. It therefore enhances the effectiveness of tourism planning and management in protected areas.
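The core of the LCP step is ordinary Dijkstra search over a raster whose cell values are the combined (physical, biological, scenic) costs. The small random grid, the 4-neighbour moves and the criterion weights below are illustrative, not the Yunnan dataset.

```python
# Least-cost path over a cost raster using Dijkstra's algorithm (4-neighbour moves).
import heapq
import numpy as np

def least_cost_path(cost, start, goal):
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[nr, nc] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

rng = np.random.default_rng(7)
# Combined cost surface: weighted sum of hypothetical slope, habitat and scenery layers.
slope, habitat, scenery = rng.random((3, 30, 40))
cost = 0.5 * slope + 0.3 * habitat + 0.2 * (1.0 - scenery) + 0.05

track, total = least_cost_path(cost, start=(0, 0), goal=(29, 39))
print(f"track length: {len(track)} cells, accumulated cost: {total:.2f}")
```

In a GIS workflow the individual criterion layers would be rasterised and weighted by stakeholders before being summed into the cost surface that this search traverses.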
Creep strength enhanced ferritic (CSEF) steels are used in advanced power plant systems for high temperature applications. P92 (Cr–W–Mo–V) steel, classified under CSEF steels, is a candidate material for piping, tubing, etc., in ultra-supercritical and advanced ultra-supercritical boiler applications. In the present work, the laser welding process has been optimised for P92 material by using Taguchi-based grey relational analysis (GRA). Bead-on-plate (BOP) trials were carried out using a 3.5 kW diffusion-cooled slab CO2 laser by varying laser power, welding speed and focal position. The optimum parameters have been derived by considering responses such as depth of penetration, weld width and heat affected zone (HAZ) width. Analysis of variance (ANOVA) has been used to analyse the effect of the different parameters on the responses. Based on the ANOVA, a laser power of 3 kW, a welding speed of 1 m/min and a focal plane at -4 mm have emerged as the optimised set of parameters. The responses at the optimised parameters obtained using the GRA have been verified experimentally and found to closely correlate with the predicted values.
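Grey relational analysis itself reduces to a few array operations: normalise each response according to whether larger or smaller is better, compute grey relational coefficients against the ideal sequence, and average them into a grade used to rank the trials. The response table below is made up for illustration; it is not the P92 bead-on-plate data.

```python
# Taguchi/GRA post-processing: rank experimental trials by grey relational grade.
import numpy as np

# Hypothetical responses per trial: penetration depth (larger-the-better),
# weld width and HAZ width (both smaller-the-better).
responses = np.array([[4.1, 1.9, 0.42],
                      [4.8, 2.3, 0.55],
                      [5.6, 2.1, 0.48],
                      [5.2, 1.8, 0.40]])
larger_better = np.array([True, False, False])

# Step 1: normalise each response column to [0, 1] according to its goal.
lo, hi = responses.min(axis=0), responses.max(axis=0)
norm = np.where(larger_better, (responses - lo) / (hi - lo), (hi - responses) / (hi - lo))

# Step 2: grey relational coefficients against the ideal (normalised value 1.0).
zeta = 0.5                                   # distinguishing coefficient
delta = 1.0 - norm
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 3: grade = mean coefficient per trial; the highest grade wins.
grade = coeff.mean(axis=1)
print("grey relational grades:", np.round(grade, 3), "| best trial:", grade.argmax() + 1)
```

In the paper this grade is computed for each Taguchi run, and ANOVA on the grades then attributes the response variation to laser power, welding speed and focal position.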
Research into automatically searching for an optimal neural network (NN) by optimisation algorithms is a significant research topic in deep learning and artificial intelligence. However, this is still challenging due to two issues: both the hyperparameters and the architecture should be optimised, and the optimisation process is computationally expensive. To tackle these two issues, this paper focusses on solving the hyperparameter and architecture optimisation problem for the NN and proposes a novel lightweight scale-adaptive fitness evaluation-based particle swarm optimisation (SAFE-PSO) approach. Firstly, the SAFE-PSO algorithm considers the hyperparameters and architectures together in the optimisation problem and therefore can find their optimal combination for the globally best NN. Secondly, the computational cost can be reduced by using multi-scale accuracy evaluation methods to evaluate candidates. Thirdly, a stagnation-based switch strategy is proposed to adaptively switch between different evaluation methods to better balance search performance and computational cost. The SAFE-PSO algorithm is tested on two widely used datasets: the 10-category (i.e., CIFAR10) and the 100-category (i.e., CIFAR100) datasets. The experimental results show that SAFE-PSO is very effective and efficient: it can not only find a promising NN automatically but also find a better NN than the compared algorithms at the same computational cost.
Since 2019, the coronavirus disease-19 (COVID-19) has been spreading rapidly worldwide, posing an unignorable threat to the global economy and human health. It is a disease caused by severe acute respiratory syndrome coronavirus 2, a single-stranded RNA virus of the genus Betacoronavirus. This virus is highly infectious and relies on its angiotensin-converting enzyme 2 receptor to enter cells. With the increase in the number of confirmed COVID-19 diagnoses, the difficulty of diagnosis due to the lack of global healthcare resources becomes increasingly apparent. Deep learning-based computer-aided diagnosis models with high generalisability can effectively alleviate this pressure. Hyperparameter tuning is essential in training such models and significantly impacts their final performance and training speed. However, traditional hyperparameter tuning methods are usually time-consuming and unstable. To solve this issue, we introduce Particle Swarm Optimisation to build a PSO-guided Self-Tuning Convolution Neural Network (PSTCNN), allowing the model to tune hyperparameters automatically. Therefore, the proposed approach can reduce human involvement. Also, the optimisation algorithm can select the combination of hyperparameters in a targeted manner, thus stably achieving a solution closer to the global optimum. Experimentally, the PSTCNN can obtain quite excellent results, with a sensitivity of 93.65%±1.86%, a specificity of 94.32%±2.07%, a precision of 94.30%±2.04%, an accuracy of 93.99%±1.78%, an F1-score of 93.97%±1.78%, a Matthews Correlation Coefficient of 87.99%±3.56%, and a Fowlkes-Mallows Index of 93.97%±1.78%. Our experiments demonstrate that, compared to traditional methods, hyperparameter tuning of the model using an optimisation algorithm is faster and more effective.
In this paper, the modelling and multi-objective optimal control of batch processes using a recurrent neuro-fuzzy network are presented. The recurrent neuro-fuzzy network forms a "global" nonlinear long-range prediction model through the fuzzy conjunction of a number of "local" linear dynamic models. Network output is fed back to the network input through one or more time delay units, which ensures that predictions from the recurrent neuro-fuzzy network are long-range. In building a recurrent neural network model, process knowledge is used initially to partition the process's nonlinear characteristics into several local operating regions and to aid in the initialisation of the corresponding network weights. Process operational data is then used to train the network. Membership functions of the local regimes are identified, and local models are discovered via network training. Based on a recurrent neuro-fuzzy network model, a multi-objective optimal control policy can be obtained. The proposed technique is applied to a fed-batch reactor.
The paper studies stochastic dynamics of a two-degree-of-freedom system, where a primary linear system is connected to a nonlinear energy sink with cubic stiffness nonlinearity and viscous damping. While the primary mass is subjected to a zero-mean Gaussian white noise excitation, the main objective of this study is to maximise the efficiency of the targeted energy transfer in the system. A surrogate optimisation algorithm is proposed for this purpose and adopted for the stochastic framework. The optimisations are conducted separately for the nonlinear stiffness coefficient alone as well as for both the nonlinear stiffness and damping coefficients together. Three different optimisation cost functions, based on either the energy of the system's components or the dissipated energy, are considered. The results demonstrate some clear trends in the values of the nonlinear energy sink coefficients and show the effect of different cost functions on the optimal values of the nonlinear system's coefficients.
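A bare-bones version of the underlying stochastic model is easy to simulate: a linear primary oscillator driven by Gaussian white noise, coupled to a nonlinear energy sink through a cubic spring and a linear damper, integrated with Euler-Maruyama. The parameter values and the "fraction of energy dissipated in the NES" measure below are illustrative stand-ins for the cost functions discussed in the paper.

```python
# Euler-Maruyama simulation of a white-noise-driven linear primary oscillator
# coupled to a nonlinear energy sink (NES) with cubic stiffness and linear damping.
import numpy as np

m1, c1, k1 = 1.0, 0.02, 1.0          # primary mass, damping, stiffness
m2, cn, kn = 0.05, 0.01, 0.5         # NES mass, damping, cubic stiffness coefficient
D = 0.01                             # white-noise intensity on the primary mass
dt, n_steps = 1e-3, 500_000
rng = np.random.default_rng(11)
dW = rng.normal(0.0, np.sqrt(dt), n_steps)

x1 = v1 = x2 = v2 = 0.0
E_nes = E_primary = 0.0              # energy dissipated in each damper

for i in range(n_steps):
    rel_x, rel_v = x1 - x2, v1 - v2
    f_couple = kn * rel_x ** 3 + cn * rel_v        # force the NES exerts on mass 1
    a1 = (-c1 * v1 - k1 * x1 - f_couple) / m1
    a2 = f_couple / m2
    v1 += a1 * dt + np.sqrt(2.0 * D) / m1 * dW[i]  # noise enters the primary equation only
    v2 += a2 * dt
    x1 += v1 * dt
    x2 += v2 * dt
    E_nes += cn * rel_v ** 2 * dt                  # energy dissipated in the NES damper
    E_primary += c1 * v1 ** 2 * dt                 # energy dissipated in the primary damper

print("fraction of dissipated energy absorbed by the NES: %.2f"
      % (E_nes / (E_nes + E_primary)))
```

A surrogate optimiser of the kind described in the abstract would treat kn and cn as design variables and this dissipated-energy ratio (or another energy-based measure) as the cost function evaluated by repeated simulations of this type.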