The safety factor is a crucial quantitative index for evaluating slope stability. However, traditional calculation methods suffer from unreasonable assumptions, complex soil composition, and inadequate consideration of the influencing factors, leading to large errors in their calculations. Therefore, a stacking ensemble learning model (stacking-SSAOP) based on multi-layer regression algorithm fusion and optimized by the sparrow search algorithm is proposed for predicting the slope safety factor. In this method, the density, cohesion, friction angle, slope angle, slope height, and pore pressure ratio are selected as characteristic parameters from 210 sets of established slope sample data. Random Forest, Extra Trees, AdaBoost, Bagging, and Support Vector Regression are used as the base models (inner loop) to construct the first-level regression algorithm layer, and XGBoost is used as the meta-model (outer loop) to construct the second-level regression algorithm layer, completing the stacked learning model and improving prediction accuracy. The sparrow search algorithm is used to optimize the hyperparameters of the above six regression models and to correct the over- and underfitting problems of the single regression models, further improving prediction accuracy. The mean square error (MSE) between predicted and true values and the fit to the data are compared and analyzed. The MSE of the stacking-SSAOP model was found to be smaller than that of the single regression model (MSE = 0.03917); the former therefore has higher prediction accuracy and better data fitting. This study innovatively applies the sparrow search algorithm to predicting the slope safety factor, showcasing its advantages over traditional methods. Additionally, the proposed stacking-SSAOP model integrates multiple regression algorithms to enhance prediction accuracy. The model not only refines the prediction of the slope safety factor but also offers a fresh approach to handling intricate soil composition and other influencing factors, making it a precise and reliable method for slope stability evaluation. This research holds importance for the modernization and digitalization of slope safety assessments.
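The two-level scheme described above can be illustrated with a deliberately tiny, self-contained sketch: two trivial base models are trained on one fold, and a least-squares meta-model is fitted on their held-out predictions. The base and meta learners here (a slope-only fit and a mean predictor, combined linearly) are stand-ins for the paper's RF/Extra Trees/AdaBoost/Bagging/SVR bases and XGBoost meta-model, which are not reproduced.

```python
# Toy two-level stacking sketch (illustrative only; not the paper's model).

def fit_slope(xs, ys):
    # base model 1: y = k*x fitted through the origin
    k = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return lambda x: k * x

def fit_mean(xs, ys):
    # base model 2: constant predictor (ignores x)
    m = sum(ys) / len(ys)
    return lambda x: m

def solve2x2(a, b, c, d, e, f):
    # solve [[a, b], [c, d]] @ [w1, w2] = [e, f] by Cramer's rule
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

def stack_fit(xs, ys):
    half = len(xs) // 2
    # level 1: train base models on the first fold
    bases = [fit_slope(xs[:half], ys[:half]), fit_mean(xs[:half], ys[:half])]
    # level 2: fit meta-weights on held-out base predictions (normal equations)
    p1 = [bases[0](x) for x in xs[half:]]
    p2 = [bases[1](x) for x in xs[half:]]
    yh = ys[half:]
    w1, w2 = solve2x2(
        sum(a * a for a in p1), sum(a * b for a, b in zip(p1, p2)),
        sum(a * b for a, b in zip(p1, p2)), sum(b * b for b in p2),
        sum(a * y for a, y in zip(p1, yh)), sum(b * y for b, y in zip(p2, yh)),
    )
    return lambda x: w1 * bases[0](x) + w2 * bases[1](x)

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.0 * x for x in xs]          # noiseless y = 2x
model = stack_fit(xs, ys)
print(round(model(10.0), 6))        # 20.0: the meta-model learns to trust the exact base
```

The meta-model assigns all weight to the slope base because it predicts the holdout fold perfectly, which is exactly the division of labor stacking is meant to learn.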
The Bald Eagle Search algorithm (BES) is an emerging meta-heuristic algorithm. It simulates the hunting behavior of eagles and obtains an optimal solution through three stages: a selection stage, a search stage, and a swooping stage. However, BES tends to fall into local optima, and its ability to explore the full search space needs to be improved. To fill this gap, we propose an improved bald eagle algorithm (CABES) that integrates Cauchy mutation and adaptive optimization to help BES escape local optima. First, CABES introduces a Cauchy mutation strategy to adjust the step size of the selection stage and select a better search range. Second, in the search stage, CABES modifies the position-update formula with an adaptive weight factor to further promote the local optimization capability of BES. To verify the performance of CABES, the algorithm is evaluated on the CEC2017 benchmark functions. The test results are compared with those of Particle Swarm Optimization (PSO), the Whale Optimization Algorithm (WOA), and the Archimedes Optimization Algorithm (AOA). The experimental results show that CABES provides good exploration and exploitation capabilities and is strongly competitive with the tested algorithms. Finally, CABES is applied to four constrained engineering problems and a groundwater engineering model, which further verifies its effectiveness and efficiency in practical engineering problems.
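The Cauchy mutation idea, heavy-tailed perturbations whose occasional long jumps help a search escape local optima, can be sketched independently of BES. The shrinking 1/t scale below is an illustrative stand-in for CABES's selection-stage step-size control, not the paper's exact formula.

```python
import math, random

def cauchy(scale):
    # standard Cauchy sample via inverse CDF: heavy tails produce
    # occasional long jumps that help escape local optima
    return scale * math.tan(math.pi * (random.random() - 0.5))

def sphere(x):
    # simple benchmark objective: global minimum 0 at the origin
    return sum(v * v for v in x)

random.seed(0)
best = [3.0, -2.0]                         # initial agent, f(best) = 13
for t in range(1, 2001):
    scale = 1.0 / t                        # illustrative shrinking step size
    cand = [v + cauchy(scale) for v in best]
    if sphere(cand) < sphere(best):        # greedy acceptance
        best = cand
print(round(sphere(best), 4))              # much smaller than the initial 13.0
```

A Gaussian mutation with the same scale would take long jumps exponentially rarely; the Cauchy tail makes them only polynomially rare, which is the property CABES exploits.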
The hardness of the integer factoring problem (IFP) plays a core role in the security of the RSA-like cryptosystems that are widely used today. Besides Shor's quantum algorithm, which can solve the IFP in polynomial time, quantum annealing algorithms (QAA) also manifest certain advantages in factoring integers. On the experimental side, the integers reported to have been successfully factored on the D-Wave QAA platform are much larger than those factored with Shor-like quantum algorithms. In this paper, we report some interesting observations about the effects of QAA in solving the IFP. More specifically, we introduce a metric, called the T-factor, that to some extent measures the density of occupied qubits when conducting IFP tasks on D-Wave. We find that the T-factor has an obvious effect on annealing times for the IFP: the larger the T-factor, the faster the annealing. An explanation of this phenomenon is also given.
Advance knowledge of the carbon emission factors of a power grid can provide users with effective carbon reduction advice, which is of immense importance in mobilizing the entire society to reduce carbon emissions. The method of calculating node carbon emission factors based on carbon emission flow theory requires real-time parameters of the power grid and therefore cannot provide carbon factor information beforehand. To address this issue, a prediction model based on a graph attention network is proposed. The model uses a graph structure suited to the topology of the power grid and designs a supervised network using the loads of the grid nodes and the corresponding carbon factor data. The network extracts features and transmits information in a way better suited to the power system and can flexibly adjust the equivalent topology, thereby increasing structural diversity. Its input and output data are simple and do not require power grid parameters. We demonstrated its effectiveness on the IEEE 39-bus and IEEE 118-bus systems, with average error rates of 2.46% and 2.51%, respectively.
A multi-strategy hybrid whale optimization algorithm (MSHWOA) for complex constrained optimization problems is proposed to overcome the drawbacks of easily falling into local optima, slow convergence, and low optimization precision. First, the population is initialized using good-point-set theory, which increases the randomness and diversity of the population and lays the foundation for global optimization. Then, a novel linear update equation for the convergence factor is designed to coordinate exploration and exploitation. At the same time, the global exploration and local exploitation capabilities are improved through the siege mechanism of the Harris Hawks optimization algorithm. Finally, simulation experiments on six benchmark functions, together with the Wilcoxon rank-sum test, are conducted to evaluate the optimization performance of the improved algorithm. The experimental results show that the proposed algorithm achieves more significant improvements in optimization accuracy, convergence speed, and robustness than the comparison algorithms.
This article addresses the tendency of the Arithmetic Optimization Algorithm (AOA) to fall into local optima and its insufficient exploration capability, proposing an improved Arithmetic Optimization Algorithm with a multi-strategy mechanism (BSFAOA). The algorithm introduces three strategies within the standard AOA framework: an adaptive balance factor (SMOA) based on sine functions, a search strategy combining spiral search and Brownian motion, and a hybrid perturbation strategy based on the whale fall mechanism and polynomial differential learning. BSFAOA is analyzed in depth on the well-known 23 benchmark functions, the CEC2019 test functions, and four real optimization problems. The experimental results demonstrate that BSFAOA better balances exploration and exploitation, significantly enhancing the stability, convergence behavior, and search efficiency of AOA.
To overcome drawbacks of conventional methods such as irregular circuit construction and low system throughput, a new factor correction scheme for the coordinate rotation digital computer (CORDIC) algorithm is proposed. Based on the relationship between the iteration formulae, a new iteration formula is introduced that reduces the correction operation to a few simple shift and add operations. As one key part, the effects of rounding error are analyzed mathematically, and it is concluded that they can be reduced by an appropriate selection of coefficients in the iteration formula. The model is then set up in MATLAB and coded in the Verilog HDL language. The proposed algorithm is also synthesized and verified on a field-programmable gate array (FPGA). The results show that, for the same precision, the new scheme requires only one additional clock cycle and no change in the elementary iteration compared with the conventional algorithm. In addition, the circuit realization is regular and the change in system throughput is minimal.
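For readers unfamiliar with CORDIC, a floating-point sketch of the standard rotation mode (the baseline whose gain-correction step the scheme above improves in hardware) is:

```python
import math

# Standard rotation-mode CORDIC (floating-point sketch; the paper's
# shift-add gain correction is hardware-specific and not reproduced here).
N = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
GAIN = 1.0
for i in range(N):
    GAIN *= math.cos(ANGLES[i])            # K = prod cos(atan(2^-i)) ~ 0.60725

def cordic(theta):
    # iteratively rotate (1, 0) toward angle theta using only
    # shift-equivalent multiplications by 2^-i and additions
    x, y, z = 1.0, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0        # rotate toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return x * GAIN, y * GAIN              # scale out the accumulated gain

c, s = cordic(math.pi / 6)
print(round(c, 6), round(s, 6))            # 0.866025 0.5, i.e. cos/sin of 30 deg
```

The final multiplication by GAIN is exactly the correction factor the paper folds into the iterations themselves, so that a hardware datapath never needs a true multiplier.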
This investigation assessed the efficacy of 10 widely used machine learning algorithms (MLA) for gully erosion susceptibility mapping (GESM) in Iran: the least absolute shrinkage and selection operator (LASSO), generalized linear model (GLM), stepwise generalized linear model (SGLM), elastic net (ENET), partial least squares (PLS), ridge regression, support vector machine (SVM), classification and regression trees (CART), bagged CART, and random forest (RF). The locations of 462 previously existing gully erosion sites were mapped through widespread field investigations, of which 70% (323) and 30% (139) of observations were randomly divided for algorithm calibration and validation, respectively. Twelve controlling factors for gully erosion, namely soil texture, annual mean rainfall, digital elevation model (DEM), drainage density, slope, lithology, topographic wetness index (TWI), distance from rivers, aspect, distance from roads, plan curvature, and profile curvature, were ranked in terms of their importance using each MLA. The MLA were compared using a training dataset for gully erosion and statistical measures such as RMSE (root mean square error), MAE (mean absolute error), and R-squared. Based on these comparisons, the RF algorithm exhibited the minimum RMSE and MAE and the maximum R-squared and was therefore selected as the best model. The variable importance evaluation using the RF model revealed that distance from rivers had the greatest influence on the occurrence of gully erosion, whereas plan curvature had the least. According to the GESM generated using RF, most of the study area is predicted to have low (53.72%) or moderate (29.65%) susceptibility to gully erosion, whereas only a small area is identified as having high (12.56%) or very high (4.07%) susceptibility. The outcome generated by the RF model was validated using the receiver operating characteristic (ROC) curve approach, which returned an area under the curve (AUC) of 0.985, proving the excellent forecasting ability of the model. The GESM prepared using the RF algorithm can aid decision-makers in targeting remedial actions to minimize the damage caused by gully erosion.
A new kind of optimal fuzzy PID controller is proposed, which contains two parts: an online fuzzy inference system and a conventional PID controller. In the fuzzy inference system, three adjustable factors, x_p, x_i, and x_d, are introduced. Their function is to further modify and optimize the result of the fuzzy inference so that the controller has the optimal control effect on a given object. The optimal values of these adjustable factors are determined based on the ITAE criterion and the Nelder-Mead flexible polyhedron search algorithm. This optimal fuzzy PID controller has been used to control the executive motor of the intelligent artificial leg designed by the authors. Computer simulation indicates that this controller is very effective and can be widely used to control different kinds of objects and processes.
Starting from an index mapping from one dimension to multiple dimensions, a general in-place and in-order prime factor FFT algorithm is proposed in this paper. Compared with existing prime factor FFT algorithms, this algorithm saves about half of the required storage capacity and is more efficient. In addition, this algorithm can easily implement the DFT and IDFT in a single subroutine.
Non-negative matrix factorization (NMF) is a technique for dimensionality reduction that places non-negativity constraints on the matrix factors. Based on the PARAFAC model, NMF is extended here to three-way data decomposition. The three-dimensional non-negative matrix factorization (NMF3) algorithm, which is concise and easy to implement, is given in this paper. The NMF3 implementation operates on elements rather than on vectors: it can decompose a data array directly without unfolding, unlike traditional algorithms. It has been applied to the decomposition of simulated data arrays and obtained reasonable results, showing that NMF3 can be introduced for curve resolution in chemometrics.
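The elementwise multiplicative updates underlying NMF can be sketched for the ordinary two-way case; the paper's NMF3 extends the same elementwise idea to three-way arrays under the PARAFAC model without unfolding, which is not reproduced here.

```python
import numpy as np

# Lee-Seung multiplicative updates for two-way NMF (V ~ W @ H, all entries >= 0).
rng = np.random.default_rng(0)
V = rng.random((6, 5)) @ rng.random((5, 4))      # nonnegative data matrix
r = 4                                            # factorization rank
W = rng.random((6, r)) + 0.1                     # positive initial factors
H = rng.random((r, 4)) + 0.1
eps = 1e-12                                      # guards against division by zero

err0 = np.linalg.norm(V - W @ H)
for _ in range(200):
    # elementwise multiply/divide keeps every entry non-negative by construction
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
err = np.linalg.norm(V - W @ H)
print(err < err0)                                # True: reconstruction error fell
```

The updates never subtract, so non-negativity needs no projection step; this is the property that makes the elementwise (rather than vector-based) formulation of NMF3 straightforward.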
This Part II-C of our work completes the factorizational theory of asymptotic expansions in the real domain. Here we present two algorithms for constructing canonical factorizations of a disconjugate operator starting from a basis of its kernel that forms a Chebyshev asymptotic scale at an endpoint. These algorithms arise quite naturally in our asymptotic context and prove very simple in special cases and/or for scales with a small number of terms. All the results in the three parts of this work are well illustrated by a class of asymptotic scales featuring interesting properties. Examples and counterexamples complete the exposition.
With the popularity of online payment, how to perform credit card fraud detection more accurately has become a pressing issue. With the emergence of the adaptive boosting algorithm (AdaBoost), credit card fraud detection has begun to use this method widely, but traditional AdaBoost is prone to overfitting in the presence of noisy samples. To alleviate this, this paper proposes a new idea: using the number of consecutive misclassifications of a sample to identify noisy samples, while constructing a penalty factor to reconstruct the sample weight assignment. First, theoretical analysis shows that the traditional AdaBoost method overfits on a noisy training set, which degrades classification accuracy. To this end, a penalty factor constructed from the number of consecutive misclassifications of a sample is used to reconstruct the sample weight assignment, preventing the classifier from over-focusing on noisy samples; its reasonableness is demonstrated. Then, by comparing the penalty strengths of the three different penalty factors proposed in this paper, the most reasonable penalty factor is selected. Meanwhile, to keep training time consumption in line with practical requirements, the AdaBoost algorithm with adaptive weight trimming (AWTAdaboost) is used, yielding the penalty-factor-based AWTAdaboost (PF_AWTAdaboost). Finally, PF_AWTAdaboost is experimentally validated against other traditional machine learning algorithms on credit card fraud datasets and other datasets. The results show that PF_AWTAdaboost outperforms the other methods on the credit card fraud dataset in detection accuracy, recall, and robustness, and it also shows excellent generalization performance on the other datasets.
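The core reweighting idea can be sketched as follows. The 1/(1+k) penalty form below is an illustrative assumption, not the paper's exact factor: it damps the exponential weight increase for a sample the more rounds in a row it has been misclassified, on the theory that such samples are likely noise.

```python
import math

def update_weights(weights, miss_streak, mistakes, alpha):
    # AdaBoost-style reweighting with a penalty factor on suspected noise:
    # a long run of consecutive misses shrinks the weight boost a sample gets.
    new_w, new_streak = [], []
    for w, k, wrong in zip(weights, miss_streak, mistakes):
        if wrong:
            penalty = 1.0 / (1.0 + k)          # illustrative penalty: shrinks with streak k
            new_w.append(w * math.exp(alpha * penalty))
            new_streak.append(k + 1)
        else:
            new_w.append(w * math.exp(-alpha)) # correct samples decay as usual
            new_streak.append(0)               # streak resets on a correct round
    s = sum(new_w)
    return [w / s for w in new_w], new_streak

# Two samples, both misclassified this round: one has already been missed
# 5 rounds running (suspected noise), the other is a first-time miss.
w, streak = update_weights([0.5, 0.5], [5, 0], [True, True], alpha=1.0)
print(w[0] < w[1])   # True: the suspected-noise sample receives less weight
```

Plain AdaBoost would give both samples the same exp(alpha) boost, which is exactly the over-focusing on noise that the penalty factor is designed to prevent.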
Kernel factor analysis (KFA) with varimax rotation is proposed using a Mercer kernel function, which maps the data in the original space to a high-dimensional feature space, and is compared with kernel principal component analysis (KPCA). The results show that the best error rate in handwritten digit recognition achieved by kernel factor analysis with varimax (4.2%) was superior to that of KPCA (4.4%). KFA with varimax thus recognizes handwritten digits more accurately.
Considering that hardware implementation of the normalized min-sum (NMS) decoding algorithm for low-density parity-check (LDPC) codes is difficult due to the uncertainty of the scale factor, an NMS decoding algorithm with a variable scale factor is proposed for the near-Earth space LDPC code (8177, 7154) in the Consultative Committee for Space Data Systems (CCSDS) standard. The shift characteristics of the field-programmable gate array (FPGA) are used to optimize the quantized data of the check nodes, and the LDPC decoder is then realized. Simulation and experimental results show that the designed FPGA-based LDPC decoder, by adopting the variable scaling factor in the NMS decoding algorithm, improves decoding performance, simplifies the hardware structure, accelerates convergence, and improves error-correction ability.
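The normalized min-sum check-node update itself is compact enough to sketch: each outgoing message is the scaled minimum magnitude of the other incoming LLRs, with the sign given by their product. A fixed scale factor alpha is used here for illustration; the paper's contribution is precisely making this factor variable.

```python
def nms_check_node(llrs, alpha=0.75):
    # Normalized min-sum update at one check node: for each edge, combine
    # all *other* incoming log-likelihood ratios (extrinsic principle).
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1.0
        for v in others:
            sign *= 1.0 if v >= 0 else -1.0      # sign product of the others
        out.append(alpha * sign * min(abs(v) for v in others))
    return out

msgs = nms_check_node([2.0, -3.5, 1.5, 4.0])
print(msgs)   # [-1.125, 1.125, -1.5, -1.125]
```

The scaling by alpha compensates for min-sum's systematic overestimate of the exact sum-product magnitude; choosing alpha as a power-of-two-friendly value like 0.75 is what lets an FPGA realize it with shifts and adds.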
Research reports show that the accuracies of many explicit friction factor models, having different levels of accuracy and complexity, have been improved using the genetic algorithm (GA), a global optimization approach. However, the computational cost associated with the use of GA has yet to be discussed. In this study, the parameters of sixteen explicit models for estimating the friction factor in the turbulent flow regime were optimized using two popular global search methods, the genetic algorithm (GA) and simulated annealing (SA). Based on 1000 interval values of the Reynolds number (Re) and 100 interval values of the relative roughness, corresponding friction factor (f) data were obtained by solving the Colebrook-White equation in a Microsoft Excel spreadsheet. These data were then used to modify the parameters of the selected explicit models. Although both GA and SA led to moderate or significant improvements in the accuracy of the existing friction factor models, SA outperformed GA. Moreover, SA required far less computational time than GA to complete the optimization process. It can therefore be concluded that SA is a better global optimizer than GA for finding an improved explicit friction factor model as an alternative to the implicit Colebrook-White equation in the turbulent flow regime.
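The implicit/explicit distinction at the heart of this line of work can be made concrete: the Colebrook-White equation must be solved iteratively for f, while an explicit model evaluates in one shot. Swamee-Jain is used below as one well-known explicit model; it is not necessarily among the paper's sixteen.

```python
import math

def colebrook(re, rr, iters=50):
    # Implicit Colebrook-White: 1/sqrt(f) = -2 log10(rr/3.7 + 2.51/(Re sqrt(f))),
    # solved here by fixed-point iteration on x = 1/sqrt(f).
    x = 4.0                                   # initial guess
    for _ in range(iters):
        x = -2.0 * math.log10(rr / 3.7 + 2.51 * x / re)
    return 1.0 / (x * x)

def swamee_jain(re, rr):
    # Explicit approximation: evaluates directly, no iteration needed.
    return 0.25 / math.log10(rr / 3.7 + 5.74 / re ** 0.9) ** 2

re, rr = 1e5, 1e-4                            # Reynolds number, relative roughness
f_cw, f_sj = colebrook(re, rr), swamee_jain(re, rr)
print(abs(f_cw - f_sj) / f_cw < 0.03)         # True: explicit model within ~3% here
```

Tuning the constants of such explicit formulas against tabulated Colebrook-White solutions, by GA or SA, is exactly the optimization task the study benchmarks.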
In the inking system of an offset printing press, a vibrator roller distributes ink not only in the circumferential direction but also in the axial direction. In the control process, if the ink amount is determined only by the dot area coverage without considering the impact of the vibrator roller's oscillation, the printing colour quality will be reduced. This paper describes a method of calculating the impact factor of the vibrator roller's oscillation. First, the oscillation behaviour is analyzed and sample data for the impact factor are obtained. Then, a fuzzy controller for calculating the impact factor is designed, and a genetic algorithm is used to optimize the membership functions and obtain the fuzzy control rules automatically. This fuzzy controller can be used to calculate impact factors under any printing condition, and the impact factors can be used for ink amount control in the printing process, which is important for higher printing colour quality and for reducing ink and paper waste.
With the development of e-commerce, the collaborative filtering (CF) recommendation algorithm has become popular in recent years, but it has limitations such as cold start, data sparseness, and low operating efficiency. In this paper, a CF recommendation algorithm based on the latent factor model and improved spectral clustering (CFRALFMISC) is proposed to improve forecasting precision. The latent factor model is first adopted to predict missing scores. Then, a cluster validity index is used to determine the number of clusters. Finally, the spectral clustering is improved by using the FCM algorithm to replace k-means. Simulation results show that CFRALFMISC effectively improves recommendation precision compared with other algorithms.
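The latent factor step, predicting missing scores from learned user and item factors, can be sketched with a toy SGD matrix factorization. The learning rate, regularization, and factor count below are illustrative choices, not the paper's settings.

```python
import random

# Toy latent-factor model: SGD on the observed ratings only; the learned
# factors then score every (user, item) pair, filling the missing cells.
random.seed(1)
n_users, n_items, k = 4, 5, 2
ratings = {(0, 0): 5.0, (0, 1): 3.0, (1, 0): 4.0, (1, 2): 1.0,
           (2, 1): 2.0, (2, 3): 4.0, (3, 2): 5.0, (3, 4): 4.0}
P = [[random.uniform(0, 1) for _ in range(k)] for _ in range(n_users)]
Q = [[random.uniform(0, 1) for _ in range(k)] for _ in range(n_items)]

def predict(u, i):
    return sum(P[u][f] * Q[i][f] for f in range(k))

def rmse():
    return (sum((r - predict(u, i)) ** 2 for (u, i), r in ratings.items())
            / len(ratings)) ** 0.5

before = rmse()
for _ in range(500):
    for (u, i), r in ratings.items():
        e = r - predict(u, i)
        for f in range(k):                       # gradient step with L2 shrinkage
            pu, qi = P[u][f], Q[i][f]
            P[u][f] += 0.01 * (e * qi - 0.02 * pu)
            Q[i][f] += 0.01 * (e * pu - 0.02 * qi)
after = rmse()
print(after < before)    # True: training error reduced; predict(u, i) now
                         # gives a score for any unobserved pair as well
```

Densifying the rating matrix this way before clustering is what lets the subsequent spectral/FCM step work on complete similarity information.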
The Honey Badger Algorithm (HBA) is a novel meta-heuristic algorithm proposed recently, inspired by the foraging behavior of honey badgers. In HBA, the dynamic search behavior of honey badgers, sniffing and wandering, is divided into exploration and exploitation, and the algorithm has been applied effectively to photovoltaic systems and optimization problems. However, HBA tends to suffer from local optima and slow convergence. To alleviate these challenges, an improved HBA (IHBA) fusing multiple strategies is presented in this paper. It introduces Tent chaotic mapping and composite mutation factors into HBA; meanwhile, the random control parameter is improved, and a diversified position-updating strategy is put forward to balance exploration and exploitation. IHBA is compared with 7 meta-heuristic algorithms on 10 benchmark functions and 5 engineering problems. The Wilcoxon rank-sum test, Friedman test, and Mann-Whitney U test are conducted after the simulations. The results indicate the competitiveness and merits of IHBA, which has better solution quality and convergence traits. The source code is available at: https://github.com/zhaotao789/IHBA.
A multi-qubit pure quantum state is called separable when it can be factored into the tensor product of 1-qubit pure quantum states. Factorizing a general multi-qubit pure quantum state into the tensor product of its factors (pure states containing a smaller number of qubits) can be a challenging task, especially for highly entangled states. A new criterion for the existence of such a factorization, based on the proportionality of the rows of certain associated matrices, and a factorization algorithm that follows from this criterion for systematically extracting all the factors, are developed in this paper. 3-qubit pure states play a crucial role in quantum computing and quantum information processing. For various applications, the well-known 3-qubit GHZ state, which contains two nonzero terms, and the 3-qubit W state, which contains three nonzero terms, have been studied extensively. Using the new factorization algorithm developed here, we perform a complete analysis of the entanglement of 3-qubit states that contain exactly two or exactly three nonzero terms.
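The row-proportionality criterion is easiest to see in the 2-qubit case: arranging the four amplitudes as a 2x2 matrix, the rows are proportional (equivalently, the determinant vanishes) exactly when the state factors as a tensor product. This is a sketch of the criterion's simplest instance, not the paper's full multi-qubit algorithm.

```python
def is_product_state(amps, tol=1e-12):
    # amps = [a00, a01, a10, a11], the amplitudes of a 2-qubit pure state
    # arranged as the 2x2 matrix [[a00, a01], [a10, a11]]. Proportional rows
    # <=> zero determinant <=> the state is a tensor product of 1-qubit states.
    a00, a01, a10, a11 = amps
    return abs(a00 * a11 - a01 * a10) < tol

inv_sqrt2 = 2 ** -0.5
bell = [inv_sqrt2, 0.0, 0.0, inv_sqrt2]      # (|00> + |11>)/sqrt(2): entangled
prod = [0.5, 0.5, 0.5, 0.5]                  # (|0>+|1>)(|0>+|1>)/2: separable
print(is_product_state(bell), is_product_state(prod))   # False True
```

When the rows are proportional, either row (normalized) gives the second qubit's factor and the proportionality constants give the first, which is the extraction step the paper's algorithm generalizes to more qubits.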
Funding: Supported by the Basic Research Special Plan of Yunnan Provincial Department of Science and Technology - General Project (Grant No. 202101AT070094).
Funding: Project of Key Science and Technology of Henan Province (No. 202102310259) and Henan Province University Scientific and Technological Innovation Team (No. 18IRTSTHN009).
Funding: Supported by the National Natural Science Foundation of China (NSFC) (Grant No. 61972050) and the Open Foundation of the State Key Laboratory of Networking and Switching Technology (Beijing University of Posts and Telecommunications) (SKLNST-2020-2-16).
Funding: This work is supported by the Science and Technology Projects of China Southern Power Grid (YNKJXM20222402).
Funding: the National Natural Science Foundation of China (No. 62176146).
Abstract: A multi-strategy hybrid whale optimization algorithm (MSHWOA) for complex constrained optimization problems is proposed to overcome the drawbacks of easily becoming trapped in local optima, slow convergence, and low optimization precision. First, the population is initialized by introducing the theory of good point sets, which increases the randomness and diversity of the population and lays the foundation for global optimization. Then, a novel linear update equation for the convergence factor is designed to coordinate the abilities of exploration and exploitation. At the same time, the global exploration and local exploitation capabilities are improved through the siege mechanism of the Harris Hawks optimization algorithm. Finally, simulation experiments are conducted on 6 benchmark functions, with the Wilcoxon rank-sum test, to evaluate the optimization performance of the improved algorithm. The experimental results show that the proposed algorithm achieves more significant improvements in optimization accuracy, convergence speed, and robustness than the comparison algorithms.
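The convergence factor mentioned above controls the exploration/exploitation balance. As a point of reference, a minimal sketch of the classic WOA schedule is given below; the paper's novel update equation is not specified in the abstract and is not reproduced here.

```python
import random

def woa_convergence_factor(t, t_max):
    """Classic WOA factor: decreases linearly from 2 to 0 over the run."""
    return 2.0 * (1.0 - t / t_max)

def coefficient_A(a, rng=random.random):
    """Coefficient A = 2*a*r - a with r uniform in [0, 1).

    The sign test |A| >= 1 switches the whale between exploration
    (search for new prey) and exploitation (encircling prey).
    """
    r = rng()
    return 2.0 * a * r - a
```

As `a` shrinks over iterations, |A| < 1 becomes more likely, so the swarm gradually shifts from global search to local refinement.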
Abstract: This article addresses the issues of falling into local optima and insufficient exploration capability in the Arithmetic Optimization Algorithm (AOA) by proposing an improved Arithmetic Optimization Algorithm with a multi-strategy mechanism (BSFAOA). The algorithm introduces three strategies within the standard AOA framework: an adaptive balance factor SMOA based on sine functions, a search strategy combining spiral search and Brownian motion, and a hybrid perturbation strategy based on the whale fall mechanism and polynomial differential learning. The BSFAOA algorithm is analyzed in depth on the well-known 23 benchmark functions, the CEC2019 test functions, and four real optimization problems. The experimental results demonstrate that BSFAOA better balances exploration and exploitation, significantly enhancing the stability, convergence behavior, and search efficiency of the AOA.
Funding: The National High Technology Research and Development Program of China (863 Program) (No. 2007AA01Z280)
Abstract: To overcome drawbacks of conventional methods such as irregular circuit construction and low system throughput, a new factor correction scheme for the coordinate rotation digital computer (CORDIC) algorithm is proposed. Based on the relationship between the iteration formulae, a new iteration formula is introduced, which reduces the correction operation to several simple shift and add operations. As one key part, the effects caused by rounding error are analyzed mathematically, and it is concluded that these effects can be reduced by an appropriate selection of coefficients in the iteration formula. The model is then set up in Matlab and coded in the Verilog HDL language. The proposed algorithm is also synthesized and verified on a field-programmable gate array (FPGA). The results show that, for the same precision, the new scheme requires only one additional clock cycle and no change in the elementary iteration compared with the conventional algorithm. In addition, the circuit realization is regular and the change in system throughput is minimal.
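For orientation, the scale factor that the proposed scheme compensates with shift-and-add hardware is the classic CORDIC gain K. A software sketch of circular-mode CORDIC rotation with a final gain multiply follows; this illustrates the factor being corrected, not the paper's circuit.

```python
import math

def cordic_rotate(angle, n=32):
    """Rotate (1, 0) by `angle` radians via CORDIC micro-rotations.

    Each step rotates by +/- atan(2^-i) using only shifts and adds;
    the accumulated gain is removed by one multiply with K at the end.
    Returns (cos(angle), sin(angle)) approximations.
    """
    x, y, z = 1.0, 0.0, angle
    K = 1.0
    for i in range(n):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # running inverse gain
        d = 1.0 if z >= 0 else -1.0             # rotation direction
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return x * K, y * K
```

In hardware the division by sqrt(1 + 2^-2i) is precomputed into a constant, which is exactly where factor-correction schemes like the one above intervene.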
Funding: supported by the College of Agriculture, Shiraz University (Grant No. 97GRC1M271143); funding from the UK Biotechnology and Biological Sciences Research Council (BBSRC); funded by BBSRC grant award BBS/E/C/000I0330 (Soil to Nutrition project 3, Sustainable intensification: optimisation at multiple scales).
Abstract: This investigation assessed the efficacy of 10 widely used machine learning algorithms (MLA), comprising the least absolute shrinkage and selection operator (LASSO), generalized linear model (GLM), stepwise generalized linear model (SGLM), elastic net (ENET), partial least squares (PLS), ridge regression, support vector machine (SVM), classification and regression trees (CART), bagged CART, and random forest (RF), for gully erosion susceptibility mapping (GESM) in Iran. The locations of 462 previously existing gully erosion sites were mapped through widespread field investigations, of which 70% (323) and 30% (139) of the observations were randomly divided for algorithm calibration and validation. Twelve controlling factors for gully erosion, namely soil texture, annual mean rainfall, digital elevation model (DEM), drainage density, slope, lithology, topographic wetness index (TWI), distance from rivers, aspect, distance from roads, plan curvature, and profile curvature, were ranked in terms of their importance using each MLA. The MLA were compared using a training dataset for gully erosion and statistical measures such as RMSE (root mean square error), MAE (mean absolute error), and R-squared. Based on these comparisons, the RF algorithm exhibited the minimum RMSE and MAE and the maximum R-squared, and was therefore selected as the best model. The variable importance evaluation using the RF model revealed that distance from rivers had the highest significance in influencing the occurrence of gully erosion, whereas plan curvature had the least importance. According to the GESM generated using RF, most of the study area is predicted to have a low (53.72%) or moderate (29.65%) susceptibility to gully erosion, whereas only a small area is identified as having a high (12.56%) or very high (4.07%) susceptibility. The outcome generated by the RF model was validated using the ROC (receiver operating characteristic) curve approach, which returned an area under the curve (AUC) of 0.985, demonstrating the excellent forecasting ability of the model. The GESM prepared using the RF algorithm can aid decision-makers in targeting remedial actions to minimize the damage caused by gully erosion.
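The AUC validation cited above can be reproduced with a small, dependency-free routine using the rank-sum (Mann-Whitney) formulation of AUC: the probability that a randomly chosen positive site scores above a randomly chosen negative one. The data in the usage below are illustrative, not the study's.

```python
def roc_auc(scores, labels):
    """AUC as the fraction of positive/negative pairs ranked correctly.

    Ties count half. `labels` are 1 (gully present) or 0 (absent);
    `scores` are the model's susceptibility values.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly separating model returns 1.0, random scoring hovers near 0.5, which is why the reported 0.985 indicates excellent discrimination.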
Abstract: A new kind of optimal fuzzy PID controller is proposed, which consists of two parts. One is an online fuzzy inference system, and the other is a conventional PID controller. In the fuzzy inference system, three adjustable factors x_p, x_i, and x_d are introduced. Their function is to further modify and optimize the result of the fuzzy inference so that the controller has the optimal control effect on a given object. The optimal values of these adjustable factors are determined based on the ITAE criterion and the Nelder-Mead flexible polyhedron search algorithm. This optimal fuzzy PID controller has been used to control the executive motor of the intelligent artificial leg designed by the authors. Computer simulation results indicate that this controller is very effective and can be widely used to control different kinds of objects and processes.
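A minimal sketch of a PID law in which the three adjustable factors act as extra multipliers on the proportional, integral, and derivative terms is shown below. The fuzzy inference that tunes these factors is omitted, and the gains are illustrative; this shows only where x_p, x_i, x_d enter the control law.

```python
class ScaledPID:
    """PID controller whose three terms carry adjustable factors x_p, x_i, x_d."""

    def __init__(self, kp, ki, kd, xp=1.0, xi=1.0, xd=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.xp, self.xi, self.xd = xp, xi, xd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err, dt):
        """One control update for tracking error `err` over timestep `dt`."""
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return (self.xp * self.kp * err
                + self.xi * self.ki * self.integral
                + self.xd * self.kd * deriv)
```

In the paper's scheme an optimizer (Nelder-Mead, under the ITAE criterion) would search over (x_p, x_i, x_d) while the base gains stay fixed.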
Funding: Supported by the National Natural Science Foundation of China
Abstract: Starting from an index mapping from one dimension to multiple dimensions, a general in-place and in-order prime factor FFT algorithm is proposed in this paper. Compared with existing prime factor FFT algorithms, this algorithm saves about half of the required storage capacity and achieves higher efficiency. In addition, this algorithm can easily implement the DFT and IDFT in a single subroutine.
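The one-to-multi-dimensional index mapping can be illustrated for two coprime factors with the standard Good-Thomas input map, which turns a length N1*N2 DFT into an N1 x N2 array of short DFTs with no twiddle factors. This is the textbook construction; the paper's specific in-place, in-order map may differ.

```python
from math import gcd

def pfa_index_map(N1, N2):
    """Good-Thomas input index map for a prime factor FFT.

    For coprime N1, N2, n = (N2*n1 + N1*n2) mod N is a bijection
    between the 1-D index n and the 2-D pair (n1, n2), by the
    Chinese remainder theorem.
    """
    assert gcd(N1, N2) == 1, "factors must be coprime"
    N = N1 * N2
    return [[(N2 * n1 + N1 * n2) % N for n2 in range(N2)]
            for n1 in range(N1)]
```

Because the map is a bijection, every input sample is visited exactly once, which is what makes in-place reordering possible.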
Abstract: Non-negative matrix factorization (NMF) is a technique for dimensionality reduction that places non-negativity constraints on the factor matrices. Based on the PARAFAC model, NMF was extended to three-dimensional data decomposition. The three-dimensional non-negative matrix factorization (NMF3) algorithm, which is concise and easy to implement, is given in this paper. The NMF3 implementation operates on elements rather than vectors, so it can decompose a data array directly, without the unfolding step required by traditional algorithms. It has been applied to the decomposition of a simulated data array and obtained reasonable results, showing that NMF3 can be used for curve resolution in chemometrics.
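For context, the classical two-way NMF that NMF3 generalizes is commonly fit with the Lee-Seung multiplicative updates, which keep both factors non-negative at every step. This is a 2-D baseline sketch, not the NMF3 algorithm itself.

```python
import numpy as np

def nmf(V, r, iters=200, eps=1e-9, seed=0):
    """Approximate V (non-negative, m x n) as W @ H with rank r.

    Multiplicative updates: each factor is scaled by a non-negative
    ratio, so non-negativity is preserved automatically.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis
    return W, H
```

NMF3 plays the same game on a three-way array, updating elements of three factor matrices directly instead of unfolding the array into a matrix first.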
Abstract: This Part II-C of our work completes the factorizational theory of asymptotic expansions in the real domain. Here we present two algorithms for constructing canonical factorizations of a disconjugate operator starting from a basis of its kernel that forms a Chebyshev asymptotic scale at an endpoint. These algorithms arise quite naturally in our asymptotic context and prove very simple in special cases and/or for scales with a small number of terms. All the results in the three parts of this work are illustrated by a class of asymptotic scales featuring interesting properties. Examples and counterexamples complete the exposition.
Funding: This research was funded by the Innovation and Entrepreneurship Training Program for College Students in Hunan Province in 2022 (3915).
Abstract: With the popularity of online payment, how to perform credit card fraud detection more accurately has become a hot issue. With the emergence of the adaptive boosting algorithm (AdaBoost), credit card fraud detection has widely adopted this method, but traditional AdaBoost is prone to overfitting in the presence of noisy samples. To alleviate this phenomenon, this paper proposes a new idea: using the number of consecutive misclassifications of a sample to identify noisy samples, while constructing a penalty factor to redesign the sample weight assignment. First, theoretical analysis shows that the traditional AdaBoost method overfits on a noisy training set, which degrades classification accuracy. To this end, a penalty factor constructed from the number of consecutive misclassifications of a sample is used to redesign the sample weight assignment, preventing the classifier from over-focusing on noisy samples, and its reasonableness is demonstrated. Then, by comparing the penalty strengths of the three different penalty factors proposed in this paper, the most reasonable penalty factor is selected. Meanwhile, to keep training time consumption in line with practical requirements, the AdaBoost algorithm with adaptive weight trimming (AWTAdaboost) is used, yielding the penalty-factor-based AWTAdaboost (PF_AWTAdaboost). Finally, PF_AWTAdaboost is experimentally validated against other traditional machine learning algorithms on credit card fraud datasets and other datasets. The results show that the PF_AWTAdaboost method outperforms the other methods on the credit card fraud dataset in detection accuracy, model recall, and robustness, and it also shows excellent generalization performance on the other datasets.
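The penalty-factor idea can be sketched as a damped AdaBoost weight update in which a long run of consecutive misclassifications (a likely-noisy sample) shrinks the boost that sample receives. The exponential-decay form and the constant k below are illustrative assumptions, not the paper's exact factor.

```python
import math

def reweight(weights, miss_streak, alpha, misclassified, k=0.5):
    """One penalized AdaBoost-style weight update, then normalization.

    weights: current sample weights
    miss_streak: consecutive misclassification count per sample
    alpha: weak learner weight (ln((1-err)/err)/2 in standard AdaBoost)
    misclassified: whether this round misclassified each sample
    k: decay rate of the penalty (illustrative)
    """
    new = []
    for w, streak, miss in zip(weights, miss_streak, misclassified):
        if miss:
            penalty = math.exp(-k * streak)   # shrinks with streak length
            w *= math.exp(alpha * penalty)    # damped boost for noisy samples
        else:
            w *= math.exp(-alpha)             # standard down-weighting
        new.append(w)
    s = sum(new)
    return [w / s for w in new]
```

A sample misclassified once in a while still gets the full boost (penalty near 1), while a sample misclassified many rounds in a row is boosted almost not at all, which is the overfitting cure described above.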
Funding: The National Defence Foundation of China (No. NEWL51435Qt220401)
Abstract: Kernel factor analysis (KFA) with varimax rotation was proposed using a Mercer kernel function that maps the data in the original space to a high-dimensional feature space, and it was compared with kernel principal component analysis (KPCA). The results show that the best error rate of KFA with varimax in handwritten digit recognition (4.2%) was superior to that of KPCA (4.4%). KFA with varimax thus recognizes handwritten digits more accurately.
Abstract: Considering that hardware implementation of the normalized min-sum (NMS) decoding algorithm for low-density parity-check (LDPC) codes is difficult due to the uncertainty of the scale factor, an NMS decoding algorithm with a variable scale factor is proposed for the near-Earth space LDPC code (8177, 7154) in the Consultative Committee for Space Data Systems (CCSDS) standard. The shift characteristics of the field-programmable gate array (FPGA) are used to optimize the quantized data of the check nodes, and the LDPC decoder is then realized. The simulation and experimental results show that the designed FPGA-based LDPC decoder, by adopting the scale factor in the NMS decoding algorithm, improves decoding performance, simplifies the hardware structure, accelerates convergence, and improves the error correction ability.
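The NMS check-node update itself is compact: each outgoing message takes the scaled minimum of the other incoming magnitudes, with sign equal to the product of the other signs. A reference sketch follows; the paper's contribution is making the scale factor variable and shift-friendly in hardware, which this floating-point sketch does not attempt.

```python
def nms_check_update(msgs, alpha=0.75):
    """Normalized min-sum check-node update.

    msgs: incoming LLR messages from variable nodes.
    Returns one outgoing message per edge, excluding that edge's
    own incoming message (extrinsic information).
    """
    out = []
    for i in range(len(msgs)):
        others = [m for j, m in enumerate(msgs) if j != i]
        sign = 1.0
        for m in others:
            sign = -sign if m < 0 else sign
        out.append(alpha * sign * min(abs(m) for m in others))
    return out
```

Choosing alpha as a sum of negative powers of two (e.g. 0.75 = 1/2 + 1/4) lets the multiplication become shifts and adds, which is the FPGA-friendly property the abstract exploits.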
Abstract: Research reports show that the accuracies of many explicit friction factor models, with different levels of accuracy and complexity, have been improved using the genetic algorithm (GA), a global optimization approach. However, the computational cost associated with the use of GA has yet to be discussed. In this study, the parameters of sixteen explicit models for estimating the friction factor in the turbulent flow regime were optimized using two popular global search methods, namely the genetic algorithm (GA) and simulated annealing (SA). Based on 1000 interval values of Reynolds number (Re) and 100 interval values of relative roughness, corresponding friction factor (f) data were obtained by solving the Colebrook-White equation in a Microsoft Excel spreadsheet. These data were then used to modify the parameters of the selected explicit models. Although both GA and SA led to moderate or significant improvements in the accuracy of the existing friction factor models, SA outperformed GA. Moreover, SA required far less computational time than GA to complete the optimization process. It can therefore be concluded that SA is a better global optimizer than GA for finding an improved explicit friction factor model as an alternative to the implicit Colebrook-White equation in the turbulent flow regime.
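The implicit Colebrook-White equation that the explicit models approximate can be solved numerically by fixed-point iteration on x = 1/sqrt(f), which converges quickly in the turbulent regime. A minimal sketch (the Re and roughness values in the usage are illustrative, not the study's grid):

```python
import math

def colebrook(Re, rel_rough, tol=1e-12, max_iter=100):
    """Solve 1/sqrt(f) = -2 log10(rel_rough/3.7 + 2.51/(Re sqrt(f))).

    Iterates on x = 1/sqrt(f); each pass substitutes the current x
    into the right-hand side until the change falls below `tol`.
    """
    x = 7.0  # rough initial guess for 1/sqrt(f)
    for _ in range(max_iter):
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / Re)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / (x ** 2)  # friction factor f
```

Explicit models exist precisely to avoid this iteration; the study's point is that SA tunes their constants against such reference solutions more cheaply than GA.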
Funding: Supported by the National Science and Technology Supporting Program (No. 2012BAF13B05-1), the National Natural Science Foundation (No. 51105009), and the Beijing Natural Science Foundation (No. 3113025)
Abstract: In the inking system of an offset printing press, a vibrator roller distributes ink not only in the circumferential direction but also in the axial direction. In the control process, if the ink amount is determined only by the dot area coverage, without considering the impact of the vibrator roller's oscillation, the printing colour quality will be reduced. This paper describes a method of calculating the impact factor of the vibrator roller's oscillation. First, the oscillation performance is analyzed and sample data for the impact factor are obtained. Then, a fuzzy controller for calculating the impact factor is designed, and a genetic algorithm is used to optimize the membership functions and obtain the fuzzy control rules automatically. This fuzzy controller can calculate impact factors for any printing condition, and the impact factors can be used for ink amount control in the printing process, which is important for higher printing colour quality and for reducing ink and paper waste.
基金the National Natural Science Foundation of China (Grant No. 61762031)Guangxi Key Research and Development Plan (Gui Science AB17195029, Gui Science AB18126006)+3 种基金Guangxi key Laboratory Fund of Embedded Technology and Intelligent System, 2017 Innovation Project of Guangxi Graduate Education (No. YCSW2017156)2018 Innovation Project of Guangxi Graduate Education (No. YCSW2018157)Subsidies for the Project of Promoting the Ability of Young and Middleaged Scientific Research in Universities and Colleges of Guangxi (KY2016YB184)2016 Guilin Science and Technology Project (Gui Science 2016010202).
Abstract: Due to the development of e-commerce, the collaborative filtering (CF) recommendation algorithm has become popular in recent years. It has limitations such as cold start, data sparseness, and low operational efficiency. In this paper, a CF recommendation algorithm based on the latent factor model and improved spectral clustering (CFRALFMISC) is proposed to improve forecasting precision. The latent factor model is first adopted to predict the missing scores. Then, a cluster validity index is used to determine the number of clusters. Finally, the spectral clustering is improved by using the FCM algorithm to replace k-means. The simulation results show that CFRALFMISC can effectively improve recommendation precision compared with other algorithms.
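The latent factor stage can be sketched as stochastic gradient descent on a matrix factorization of the observed scores: each missing rating is predicted as a dot product of a user vector and an item vector. The spectral clustering and FCM stages are omitted, and all hyperparameters below are illustrative.

```python
import random

def fit_latent_factors(ratings, n_users, n_items, k=2, lr=0.05,
                       reg=0.02, epochs=500, seed=0):
    """Fit user factors P and item factors Q by SGD on observed ratings.

    ratings: list of (user, item, score) triples (the observed entries).
    Prediction for any (u, i), observed or missing, is dot(P[u], Q[i]).
    """
    rng = random.Random(seed)
    P = [[rng.uniform(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(pu * qi for pu, qi in zip(P[u], Q[i]))
            err = r - pred
            for f in range(k):   # regularized gradient step on both factors
                P[u][f] += lr * (err * Q[i][f] - reg * P[u][f])
                Q[i][f] += lr * (err * P[u][f] - reg * Q[i][f])
    return P, Q
```

Once P and Q are fit, the dense predicted score matrix fills in the missing entries, which is the input the clustering stage then works with.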
基金supported by National Science Foundation of China(Grant No.52075152)Xining Big Data Service Administration.
Abstract: The Honey Badger Algorithm (HBA) is a novel meta-heuristic algorithm proposed recently, inspired by the foraging behavior of honey badgers. The dynamic search behavior of honey badgers, sniffing and wandering, is divided into exploration and exploitation in HBA, which has been applied effectively to photovoltaic systems and optimization problems. However, HBA tends to suffer from local optima and slow convergence. To alleviate these challenges, an improved HBA (IHBA) fusing multiple strategies is presented in this paper. It introduces Tent chaotic mapping and composite mutation factors into HBA; meanwhile, the random control parameter is improved, and a diversified position update strategy is put forward to balance exploration and exploitation. IHBA is compared with 7 meta-heuristic algorithms on 10 benchmark functions and 5 engineering problems. The Wilcoxon rank-sum test, Friedman test, and Mann-Whitney U test are conducted after the simulations. The results indicate the competitiveness and merits of IHBA, which has better solution quality and convergence traits. The source code is available at: https://github.com/zhaotao789/IHBA.
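Tent chaotic mapping, one of the fused strategies, is commonly used to spread the initial population more evenly over the search space than plain uniform sampling. A minimal sketch follows (mu = 2 is the standard fully chaotic setting; the exact use inside IHBA may differ, and the seed value is illustrative):

```python
def tent_sequence(x0, n, mu=2.0):
    """Generate n iterates of the tent map on [0, 1]."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        xs.append(x)
    return xs

def chaotic_init(pop, dim, low, high, x0=0.37):
    """Initialize `pop` candidates of dimension `dim` in [low, high]
    by rescaling a tent-map sequence instead of uniform random draws."""
    seq = tent_sequence(x0, pop * dim)
    return [[low + (high - low) * seq[i * dim + j] for j in range(dim)]
            for i in range(pop)]
```

One caveat worth noting: in binary floating point the mu = 2 tent map degenerates after many iterations, so practical implementations often use mu slightly below 2 or reseed periodically.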
Abstract: A multi-qubit pure quantum state is called separable when it can be factored as the tensor product of 1-qubit pure quantum states. Factorizing a general multi-qubit pure quantum state into the tensor product of its factors (pure states containing a smaller number of qubits) can be a challenging task, especially for highly entangled states. A new criterion for the existence of such a factorization, based on the proportionality of the rows of certain associated matrices, together with a factorization algorithm that follows from this criterion for systematically extracting all the factors, is developed in this paper. 3-qubit pure states play a crucial role in quantum computing and quantum information processing. For various applications, the well-known 3-qubit GHZ state, which contains two nonzero terms, and the 3-qubit W state, which contains three nonzero terms, have been studied extensively. Using the new factorization algorithm developed here, we perform a complete entanglement analysis of 3-qubit states that contain exactly two or exactly three nonzero terms.
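For a single bipartite split, the row-proportionality criterion reduces to a rank-one test: reshape the amplitude vector into a matrix indexed by the two subsystems, and the state factors across that split exactly when all rows are proportional (rank one). The sketch below handles only the simplest special case, splitting the first qubit from the rest; the paper's algorithm extracts all factors systematically.

```python
import numpy as np

def splits_first_qubit(amplitudes, tol=1e-10):
    """Test whether a pure n-qubit state factors as (qubit 1) x (rest).

    Reshaping the 2^n amplitudes into a 2 x 2^(n-1) matrix makes the
    rows the conditional states for qubit 1 = |0> and |1>; proportional
    rows (rank one) mean the split is separable.
    """
    a = np.asarray(amplitudes, dtype=complex)
    m = a.reshape(2, -1)
    return np.linalg.matrix_rank(m, tol=tol) == 1

# Illustrative states: GHZ is entangled; |0> x |+> x |0> is a product.
ghz = [1 / np.sqrt(2), 0, 0, 0, 0, 0, 0, 1 / np.sqrt(2)]
prod = np.kron([1, 0], np.kron([1 / np.sqrt(2), 1 / np.sqrt(2)], [1, 0]))
```

For the GHZ state the two rows are (1/sqrt(2), 0, 0, 0) and (0, 0, 0, 1/sqrt(2)), which are not proportional, so no factor splits off; this matches its well-known genuine entanglement.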