Due to their outstanding performance in cheminformatics, machine learning algorithms have been increasingly used to mine molecular properties and biomedical big data. The performance of machine learning models is known to depend critically on the selection of the hyper-parameter configuration. However, many studies either explored the optimal hyper-parameters via grid search or employed arbitrarily selected hyper-parameters, which can easily lead to a suboptimal hyper-parameter configuration. In this study, the Hyperopt library, which embeds Bayesian optimization, is employed to find optimal hyper-parameters for different machine learning algorithms. Six drug discovery datasets, covering solubility, probe-likeness, hERG, Chagas disease, tuberculosis, and malaria, are used to compare different machine learning algorithms with ECFP6 fingerprints. This contribution aims to evaluate whether Bernoulli Naïve Bayes, logistic regression, AdaBoost decision tree, random forest, support vector machine, and deep neural network algorithms with optimized hyper-parameters offer any improvement in testing over the referenced models, as assessed by an array of metrics including AUC, F1-score, Cohen's kappa, Matthews correlation coefficient, recall, precision, and accuracy. Based on the rank-normalized score approach, the Hyperopt models achieve better or comparable performance on 33 out of 36 models across the drug discovery datasets, showing the significant improvement achieved by employing the Hyperopt library. The open-source code of all six machine learning frameworks employed with the Hyperopt Python package is provided to make this approach accessible to scientists who are not familiar with writing code.
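To make the approach concrete, the following is a minimal sketch of Bayesian hyper-parameter search with the Hyperopt library (TPE sampler), tuning a random forest on a synthetic binary dataset that stands in for ECFP6 fingerprint data; the search space, evaluation budget, and metric are illustrative choices, not the configuration used in the study.

```python
from hyperopt import fmin, tpe, hp, Trials, STATUS_OK
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a fingerprint dataset (rows = compounds, columns = bits).
X, y = make_classification(n_samples=500, n_features=512, n_informative=50, random_state=0)

def objective(params):
    clf = RandomForestClassifier(
        n_estimators=int(params["n_estimators"]),
        max_depth=int(params["max_depth"]),
        min_samples_leaf=int(params["min_samples_leaf"]),
        random_state=0,
    )
    # Maximize cross-validated AUC by minimizing its negative.
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    return {"loss": -auc, "status": STATUS_OK}

space = {
    "n_estimators": hp.quniform("n_estimators", 100, 1000, 50),
    "max_depth": hp.quniform("max_depth", 3, 30, 1),
    "min_samples_leaf": hp.quniform("min_samples_leaf", 1, 10, 1),
}

trials = Trials()
best = fmin(objective, space, algo=tpe.suggest, max_evals=50, trials=trials)
print("Best hyper-parameters found:", best)
```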
A theoretical methodology is suggested for detecting the presence of malaria parasites with the help of an intelligent hyper-parameter-tuned Deep Learning (DL) based malaria parasite detection and classification (HPTDL-MPDC) system for smear images of human peripheral blood. Some existing approaches fail to capture malaria parasitic features, which reduces prediction accuracy. The proposed system trains a model to classify peripheral blood smear images into parasite or non-parasite classes using an available online dataset. The Adagrad optimizer is combined with the proposed Deep Neural Network (DNN), which is pre-trained using the contrastive divergence method. Features are extracted from the images to train the DNN and initialize its visible variables, and the concatenated features of the smear images are used as the feature vector. Lastly, hyper-parameter tuning is used to fine-tune the DNN, which computes the probabilities of the class labels. The suggested system outperforms recent methodologies with an accuracy of 91%, precision of 89%, recall of 93%, and F1-score of 91%. The primary application of HPTDL-MPDC is detecting the malaria parasite in smear images of human peripheral blood.
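As an illustration of the final fine-tuning stage only, the sketch below compiles a small dense network with the Adagrad optimizer for binary parasite / non-parasite classification over extracted feature vectors; the layer sizes and the random stand-in features are hypothetical, and the contrastive-divergence pre-training step is not reproduced here.

```python
import numpy as np
import tensorflow as tf

# Random stand-in for feature vectors extracted from smear images (200 samples, 64 features).
X = np.random.rand(200, 64).astype("float32")
y = np.random.randint(0, 2, size=200)  # 0 = non-parasite, 1 = parasite

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of the parasite class
])
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.01),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```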
Regularized system identification has become the research frontier of system identification in the past decade. One related core subject is to study the convergence properties of various hyper-parameter estimators as the sample size goes to infinity. In this paper, we consider one commonly used hyper-parameter estimator, the empirical Bayes (EB) estimator. Its convergence in distribution has been studied, and the explicit expression of the covariance matrix of its limiting distribution has been given. However, what we are truly interested in are the factors contained in the covariance matrix of the EB hyper-parameter estimator, and thus the convergence of its covariance matrix to that of its limiting distribution is required. In general, the convergence in distribution of a sequence of random variables does not necessarily guarantee the convergence of its covariance matrix. The derivation of such convergence is therefore a necessary complement to our theoretical analysis of the factors that influence the convergence properties of the EB hyper-parameter estimator. In this paper, we consider regularized finite impulse response (FIR) model estimation with deterministic inputs and show that the covariance matrix of the EB hyper-parameter estimator converges to that of its limiting distribution. Moreover, we run numerical simulations to demonstrate the efficacy of our theoretical results.
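For reference, the EB hyper-parameter estimate for the regularized FIR model is commonly written as the minimizer of the negative log marginal likelihood; the generic form below is a sketch in standard kernel-based identification notation and may differ from the paper's exact symbols.

```latex
\hat{\eta}_{\mathrm{EB}}
  = \arg\min_{\eta}\;\Big\{\, Y^{\top}\Sigma(\eta)^{-1}Y + \log\det\Sigma(\eta) \,\Big\},
\qquad
\Sigma(\eta) = \Phi P(\eta)\Phi^{\top} + \sigma^{2} I_{N},
```

where Y stacks the N output measurements, Φ is the regression matrix built from the deterministic inputs, P(η) is the prior (kernel) covariance of the FIR coefficients parameterized by the hyper-parameter η, and σ² is the noise variance.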
Machine learning models may outperform traditional statistical regression algorithms for predicting clinical outcomes. Proper validation of such models and tuning of their underlying algorithms are necessary to avoid over-fitting and poor generalizability, to which smaller datasets are more prone. In an effort to educate readers interested in artificial intelligence and model building based on machine-learning algorithms, we outline important details on cross-validation techniques that can enhance the performance and generalizability of such models.
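As a small, self-contained illustration (using scikit-learn and a synthetic dataset as a stand-in for clinical data), nested cross-validation separates hyper-parameter tuning from performance estimation: the inner loop tunes, the outer loop estimates generalizability.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

inner_cv = KFold(n_splits=5, shuffle=True, random_state=0)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=1)

# Inner loop: grid search over the regularization strength C.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=inner_cv,
    scoring="roc_auc",
)

# Outer loop: estimate of how the tuned model generalizes to unseen folds.
scores = cross_val_score(search, X, y, cv=outer_cv, scoring="roc_auc")
print(f"Nested CV AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```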
Developing technology for the beneficiation of banded iron ores containing low iron values is a challenging task due to the increasing demand for quality iron ore in India. A flotation process has been developed to treat one such ore, namely banded hematite quartzite (BHQ) containing 41.8wt% Fe and 41.5wt% SiO2, by using oleic acid, methyl isobutyl carbinol (MIBC), and sodium silicate as the collector, frother, and dispersant, respectively. The relative effects of these variables have been evaluated in half-normal plots and Pareto charts using a central composite rotatable design. A quadratic response model has been developed for both Fe grade and recovery and optimized within the experimental range. The optimum reagent dosages are found to be as follows: collector concentration of 243.58 g/t, dispersant concentration of 195.67 g/t, pH 8.69, and conditioning time of 4.8 min to achieve the maximum Fe grade of 64.25% with 67.33% recovery. The predictions of the model with regard to iron grade and recovery are in good agreement with the experimental results.
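A toy sketch of fitting a quadratic response-surface model over the four flotation variables (collector dosage, dispersant dosage, pH, conditioning time) is shown below; the data are random stand-ins generated for illustration, not the paper's central composite design points or measured grades.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
# Columns: collector (g/t), dispersant (g/t), pH, conditioning time (min).
X = rng.uniform([150, 100, 7, 2], [300, 250, 10, 6], size=(30, 4))
# Synthetic grade response used only to keep the sketch runnable.
y = 50 + 0.02 * X[:, 0] + 0.5 * X[:, 2] + rng.randn(30)

# Second-order (quadratic) response surface fitted by least squares.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)
print("Predicted grade at a candidate setting:",
      model.predict([[243.58, 195.67, 8.69, 4.8]]))
```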
Because the vibration signal of a rotating machine is one-dimensional and a large-scale convolution kernel can obtain a better perception field, a one-dimensional large-kernel convolution neural network (1DLCNN) is designed on the basis of the classical convolution neural network model (LeNet-5). Since the hyper-parameters of the 1DLCNN have a great impact on network performance, the genetic algorithm (GA) is used to optimize them, and the method of optimizing the parameters of the 1DLCNN by the genetic algorithm is named GA-1DLCNN. The experimental results show that the optimal network model based on the GA-1DLCNN method can achieve 99.9% fault diagnosis accuracy, which is much higher than those of other traditional fault diagnosis methods. In addition, the 1DLCNN is compared with a one-dimensional small-kernel convolution neural network (1DSCNN) and the classical two-dimensional convolution neural network model. The input sample lengths are set to 128, 256, 512, 1024, and 2048, respectively, and the final diagnostic accuracy results and the visual scatter plots show that the 1DLCNN performs best.
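The following toy sketch shows the shape of such a GA-driven hyper-parameter search; the search space, selection scheme, and especially the dummy fitness function (which would in practice train and validate the 1DLCNN on vibration signals) are illustrative stand-ins.

```python
import random

SEARCH_SPACE = {
    "kernel_size": [16, 32, 64, 128],   # large one-dimensional convolution kernels
    "num_filters": [8, 16, 32],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [32, 64, 128],
}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(ind):
    # Placeholder: in the real method this trains the 1DLCNN on vibration
    # signals and returns the validation diagnosis accuracy.
    return random.random()

def crossover(a, b):
    # Uniform crossover: each gene is taken from one of the two parents.
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rate=0.2):
    return {k: (random.choice(SEARCH_SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

population = [random_individual() for _ in range(20)]
for generation in range(10):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:10]                                   # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print("Best configuration found:", max(population, key=fitness))
```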
Evolutionary algorithms are time-consuming when used to optimize the wing structure of a certain high-altitude long-endurance unmanned aerial vehicle (UAV), because of the large number of evolutions and the many finite element analyses required. In order to improve efficiency, a model management framework is proposed to perform the multi-objective optimization design of the wing structure. Sufficiently accurate approximation models of the objective and constraint functions in the wing structure optimization model are built when using the model management framework; therefore, a number of finite element analyses can be avoided in the evolutionary algorithm, and satisfactory multi-objective optimization results for the wing structure of the high-altitude long-endurance UAV are obtained.
In this paper, we investigate the minimization of the age of information (AoI), a metric that measures information freshness, at the network edge with unreliable wireless communications. In particular, we consider a set of users transmitting status updates, which are collected by each user randomly over time, to an edge server through unreliable orthogonal channels. This raises a natural question: with random status update arrivals and obscure channel conditions, can we devise an intelligent scheduling policy that matches users and channels so as to stabilize the queues of all users while minimizing the average AoI? To give an adequate answer, we define a bipartite graph and formulate a dynamic edge activation problem with stability constraints. Then, we propose an online matching while learning algorithm (MatL) and discuss its implementation for wireless scheduling. Finally, simulation results demonstrate that MatL reliably learns the channel states and manages the users' buffers for fresher information at the edge.
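For context, the age of information of a user is usually defined, in the generic continuous-time form used across the AoI literature (the paper's discrete-time, multi-user notation may differ), as

```latex
\Delta(t) = t - U(t),
\qquad
\bar{\Delta} = \lim_{T \to \infty} \frac{1}{T}\int_{0}^{T} \Delta(t)\,\mathrm{d}t,
```

where U(t) is the generation time of the most recently received status update and the time-average AoI, here denoted by the overlined Δ, is the quantity the scheduling policy seeks to minimize.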
Compared with methods such as support vector machines, the Gaussian process (GP) has fewer parameters, a simple model, and probabilistic outputs. Selection of the hyper-parameters is critical to the performance of a Gaussian process model. However, the commonly used algorithm has the disadvantages of difficult determination of iteration steps, over-dependence of the optimization effect on initial values, and easily falling into local optima. To solve this problem, a method combining the Gaussian process with a memetic algorithm was proposed. Based on this method, the memetic algorithm was used to search for the optimal hyper-parameters of the Gaussian process regression (GPR) model in the training process, forming the MA-GPR algorithm, and the model was then used for prediction and testing. When used in the marine long-range precision strike system (LPSS) battle effectiveness evaluation, the proposed MA-GPR model significantly improved the prediction accuracy compared with the conjugate gradient method and the genetic algorithm optimization process.
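The sketch below illustrates what GPR hyper-parameter selection amounts to, namely maximizing the log marginal likelihood over the kernel length-scale and noise level; scikit-learn's restart-based gradient optimizer stands in here for the memetic search used in the paper, and the one-dimensional data are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(50, 1))
y = np.sin(X).ravel() + 0.1 * rng.randn(50)   # noisy observations of a smooth function

# Hyper-parameters to select: signal scale, RBF length-scale, and noise level.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=10)
gpr.fit(X, y)

print("Optimized kernel:", gpr.kernel_)
print("Log marginal likelihood:", gpr.log_marginal_likelihood_value_)
```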
Geological structures often exhibit smooth characteristics away from sharp discontinuities. One aim of geophysical inversion is to recover information about the smooth structures as well as about the sharp discontinuities. Because no specific operator can provide a perfect sparse representation of complicated geological models, hyper-parameter regularization inversion based on the iterative split Bregman method was used to recover the features of both smooth and sharp geological structures. A novel preconditioned matrix was proposed, which counteracts the natural decay of the sensitivity matrix and whose inverse matrix can be calculated easily. Application of the algorithm to synthetic data produces density models that are good representations of the designed models. The results show that the proposed algorithm is feasible and effective.
The lowest-order P1-nonconforming triangular finite element method (FEM) for elliptic and parabolic interface problems is investigated. Under some reasonable regularity assumptions on the exact solutions, the optimal-order error estimates are obtained in the broken energy norm. Finally, some numerical results are provided to verify the theoretical analysis.
Accurate stereo vision calibration is a preliminary step towards high-precision visual positioning of robots. Combining the characteristics of the genetic algorithm (GA) and particle swarm optimization (PSO), a three-stage calibration method based on hybrid intelligent optimization is proposed for nonlinear camera models in this paper. The motivation is to improve the accuracy of the calibration process. In this approach, stereo vision calibration is considered as an optimization problem that can be solved by the GA and PSO. The initial linear values are obtained in the first stage. Then, in the second stage, the two cameras' parameters are optimized separately. Finally, the integrated optimized calibration of the two models is obtained in the third stage. Direct linear transformation (DLT), GA, and PSO are used individually in the three stages. It is shown that every stage correctly finds a near-optimal solution that can be used to initialize the next stage. Simulation analysis and actual experimental results indicate that this calibration method is more accurate and robust in noisy environments than traditional calibration methods. The proposed method can fulfill the requirements of sophisticated robot visual operation.
A Bayesian estimator with informative prior distributions (a multi-normal and an inverted gamma distribution), adequate for displacement estimation at dam displacement monitoring networks, is presented. The hyper-parameters of the prior distributions are obtained by empirical Bayesian methods with non-informative meta-priors. The performances of the Bayes estimator and the classical generalized least squares estimator are compared using two measurements of the horizontal monitoring network of a concrete gravity dam: the Penha Garcia dam (Portugal). In order to test the robustness of the two estimators, a gross error is added to one of the measured horizontal directions; the Bayes estimator proves to be significantly more robust than the classical maximum likelihood estimator.
In this paper we study the problem of model selection for a linear programming-based support vector machine for regression. We propose a generalized method based on a quasi-Newton method that uses a globalization strategy and an inexact computation of first-order information. We explore the cases of two-class, multi-class, and regression problems. Simulation results on standard datasets suggest that the algorithm achieves insignificant variability when measuring residual statistical properties.
A scheme to enhance near-infrared band absorption of a Si nanoparticle by placing the Si nanoparticle into a designed gold nanostructure is proposed. Three-dimensional (3D) finite-difference time-domain simulations are employed to calculate the absorption spectrum of the Si nanostructure and maximize it by generating alternate designs. The results show that in the near-infrared region over 700 nm, the absorption of a pure Si nanoparticle is very low, but when the same nanoparticle is placed within an optimally designed gold nanostructure, its absorption cross section can be enhanced by more than two orders of magnitude in the near-infrared band.
Convolutional neural networks have a wide range of uses in computer vision. Images represent most of today's data, so it is important to know how to handle these large amounts of data efficiently. Convolutional neural networks have been shown to solve image processing problems effectively. However, when designing the network structure for a particular problem, the hyperparameters need to be adjusted for higher accuracy. This is time-consuming and requires a lot of work and domain knowledge. Designing a convolutional neural network architecture is a classic NP-hard optimization challenge. Moreover, different datasets require different combinations of models or hyperparameters, which can be time-consuming and inconvenient. Various approaches have been proposed to overcome this problem, such as grid search limited to low-dimensional spaces and random search. To address this issue, we propose an evolutionary-algorithm-based approach that dynamically enhances the structure of convolutional neural networks (CNNs) using optimized hyperparameters. This study proposes a method using the non-dominated sorting genetic algorithm (NSGA) to improve the hyperparameters of the CNN model. In addition, different types and parameter ranges of existing genetic algorithms are used. A comparative study was conducted with various state-of-the-art methodologies and algorithms. Experiments show that our proposed approach is superior to previous methods reported in the recent computing literature in terms of classification accuracy.
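At the core of NSGA-style selection is non-dominated sorting over competing objectives. The minimal sketch below extracts the Pareto front of hypothetical CNN configurations scored on two objectives, validation error and parameter count, both to be minimized; the configuration names and numbers are invented for illustration.

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    front = []
    for name, objs in candidates.items():
        if not any(dominates(other, objs)
                   for other_name, other in candidates.items() if other_name != name):
            front.append(name)
    return front

# Hypothetical configurations: (validation error, parameters in millions).
configs = {
    "small-3x3":   (0.12, 0.8),
    "wide-5x5":    (0.09, 3.2),
    "deep-7x7":    (0.08, 7.5),
    "tiny-1x1":    (0.20, 0.2),
    "bloated-9x9": (0.15, 9.0),   # dominated by wide-5x5
}
print("Non-dominated configurations:", pareto_front(configs))
```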
The number of studies in the literature that diagnose cancer with machine learning using genome data is quite limited. These studies focus on prediction performance, and the extraction of genomic factors that cause the disease is often overlooked. However, finding the underlying genetic causes is very important for early diagnosis, the development of diagnostic kits, preventive medicine, and so on. The motivation of our study was to diagnose bladder cancer (BCa) based on genetic data and to reveal the underlying genetic factors by using machine-learning models. In addition, conducting hyper-parameter optimization to get the best performance from different models, which is overlooked in most studies, was another objective of the study. Within the framework of these motivations, C4.5, random forest (RF), artificial neural networks (ANN), and deep learning (DL) were used. In this way, the diagnostic performance of decision tree (DT)-based models and black-box models on BCa was also compared. The most successful model, DL, yielded an area under the curve (AUC) of 0.985 and a mean square error (MSE) of 0.069. For each model, hyper-parameters were optimized by an evolutionary algorithm. On average, hyper-parameter optimization improved MSE, root mean square error (RMSE), LogLoss, and AUC by 30%, 17.5%, 13%, and 6.75%, respectively. The features causing BCa were extracted. For this purpose, entropy and Gini coefficients were used for the DT-based methods, and the Gedeon variable importance was used for the black-box methods. The single nucleotide polymorphisms (SNPs) rs197412, rs2275928, rs12479919, rs798766 and rs2275928, whose BCa relations have been proven in the literature, were found to be closely related to BCa. In addition, the rs1994624 and rs2241766 susceptibility loci were proposed to be examined in future studies.
We present an optimal control model of three stages of resource allocation for managing invasive species. Three types of temporal uncertainty are considered, involving the timing of discovery of an invasive pest, the timing of an induced technology development after the establishment and dispersion of an invasive species, and the timing of farmer adoption of the induced technology as the costs of controlling the invasive species increase. Using a bioeconomic optimal control model of managing invasive species, where the models in previous studies are subsets of our model, we show that when sub-structured models not including all three stages are used for managing invasive species, resource allocation for adopting preventive measures before the initial discovery of an invasive pest would be supra-optimal, while resource allocation for adopting conventional control measures after establishment and dispersion would be sub-optimal.
This paper investigates the use of the method of inequalities (MoI) to design output-feedback compensators for the problem of the control of instabilities in a laminar plane Poiseuille flow. In common with many flows, the dynamics of streamwise vortices in plane Poiseuille flow are very non-normal. Consequently, small perturbations grow rapidly with a large transient that may trigger nonlinearities and lead to turbulence even though such perturbations would, in a linear flow model, eventually decay. Such a system can be described as a conditionally linear system. The sensitivity is measured using the maximum transient energy growth, which is widely used in the fluid dynamics community. The paper considers two approaches. In the first approach, the MoI is used to design low-order proportional and proportional-integral (PI) controllers. In the second one, the MoI is combined with McFarlane and Glover's H∞ loop-shaping design procedure in a mixed-optimization approach.
Alzheimer's disease (AD) is an intensifying disorder that causes brain cells to degenerate early and be destroyed. Mild cognitive impairment (MCI) is one of the early signs of AD that interferes with people's regular functioning and daily activities. The proposed work uses a deep learning approach with a multimodal recurrent neural network (RNN) to predict whether MCI leads to Alzheimer's or not. The gated recurrent unit (GRU) RNN classifier is trained using individual and correlated features. Feature vectors are concatenated based on their correlation strength to improve prediction results. The generated feature vectors are given as input to multiple different classifiers, whose decision function is used to predict the final output, which determines whether MCI progresses to AD or not. Our findings demonstrate that, compared to individual modalities, which provided an average accuracy of 75%, our prediction model for MCI conversion to AD yielded an improvement in accuracy of up to 96% when used with multiple concatenated modalities. Comparing the accuracy of different decision functions, such as Support Vector Machine (SVM), decision tree, random forest, and ensemble techniques, it was found that the ensemble approach provided the highest accuracy (96%) and the decision tree provided the lowest accuracy (86%).
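A minimal PyTorch sketch of a GRU classifier over concatenated multimodal feature sequences is given below, assuming each subject is represented by a short sequence of visits whose per-modality feature vectors have been concatenated; the dimensions and layer sizes are illustrative, not those of the proposed model.

```python
import torch
import torch.nn as nn

class GRUConversionClassifier(nn.Module):
    def __init__(self, feature_dim=128, hidden_dim=64, num_classes=2):
        super().__init__()
        self.gru = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, visits, concatenated feature_dim)
        _, h_n = self.gru(x)           # h_n: (1, batch, hidden_dim)
        return self.head(h_n[-1])      # logits: stable MCI vs. conversion to AD

model = GRUConversionClassifier()
dummy = torch.randn(8, 5, 128)         # 8 subjects, 5 visits, 128 concatenated features
print(model(dummy).shape)              # torch.Size([8, 2])
```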