Machine learning is currently one of the research hotspots in the field of landslide prediction. To clarify and evaluate the differences in characteristics and prediction performance of different machine learning models, Conghua District, the area of Guangzhou most prone to landslide disasters, was selected for landslide susceptibility evaluation. The evaluation factors were selected using correlation analysis and the variance inflation factor method. Four machine learning methods, namely Logistic Regression (LR), Random Forest (RF), Support Vector Machines (SVM), and Extreme Gradient Boosting (XGB), were applied to construct landslide models. The models were compared and evaluated through statistical indices and receiver operating characteristic (ROC) curves. The results showed that the LR, RF, SVM, and XGB models all have good predictive performance for landslide susceptibility, with area under the curve (AUC) values of 0.752, 0.965, 0.996, and 0.998, respectively. The XGB model had the highest predictive ability, followed by the RF, SVM, and LR models. The frequency ratio (FR) accuracy of the LR, RF, SVM, and XGB models was 0.775, 0.842, 0.759, and 0.822, respectively. The RF and XGB models were superior to the LR and SVM models, indicating that ensemble algorithms have better predictive ability than single classification algorithms in regional landslide classification problems.
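The ROC/AUC comparison above can be sketched with a minimal rank-based AUC computation (the Mann-Whitney form); the labels and scores below are toy values, not the study's model outputs:

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    labels: 1 for landslide, 0 for non-landslide; scores: model outputs.
    A pair (positive, negative) counts as a win when the positive sample
    scores higher; ties count half.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy check: 3 of the 4 positive/negative pairs are ranked correctly.
print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why the 0.998 reported for XGB indicates near-perfect discrimination on the test samples.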
The martensitic transformation temperature is the basis for the application of shape memory alloys (SMAs), and the ability to quickly and accurately predict the transformation temperature of SMAs has very important practical significance. In this work, machine learning (ML) methods were utilized to accelerate the search for shape memory alloys with a targeted property (phase transition temperature). A group of composition data was selected from numerous unexplored data to design shape memory alloys using an inverse design method. Composition modeling and feature modeling were used to predict the phase transition temperature of the shape memory alloys. Experimental results for the designed shape memory alloys were obtained to verify the effectiveness of the support vector regression (SVR) model. The results show that the machine learning model can obtain target materials more efficiently and specifically, enabling the accurate and rapid design of shape memory alloys with a specific target phase transition temperature. On this basis, the relationship between the phase transition temperature and the material descriptors is analyzed, and it is shown that the key factors affecting the phase transition temperature of shape memory alloys derive from the strength of the bond energy between atoms. This work provides new ideas for the controllable design and performance optimization of Cu-based shape memory alloys.
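The inverse design loop described above can be sketched as: scan unexplored compositions, predict the transformation temperature with a trained model, and keep the candidate closest to the target. The linear surrogate standing in for the trained SVR model and the Cu-Al-Ni composition ranges below are illustrative assumptions, not fitted values from this work:

```python
# Hypothetical linear surrogate standing in for the trained SVR model;
# the coefficients are illustrative, not fitted to real SMA data.
def predict_ms(cu, al, ni):
    return 300.0 + 4.0 * al - 6.0 * ni  # predicted temperature, K (toy)

def inverse_design(target_k, step=0.5):
    """Scan candidate Cu-Al-Ni compositions (wt%) and return the one whose
    predicted transformation temperature is closest to the target."""
    best = None
    al = 10.0
    while al <= 14.0:
        ni = 2.0
        while ni <= 6.0:
            t = predict_ms(100.0 - al - ni, al, ni)  # Cu is the balance
            if best is None or abs(t - target_k) < abs(best[0] - target_k):
                best = (t, al, ni)
            ni += step
        al += step
    return best  # (predicted temperature, Al wt%, Ni wt%)

print(inverse_design(330.0))
```

The scan returns the first composition that exactly meets the 330 K target under the toy surrogate; a real run would substitute the fitted SVR model for `predict_ms`.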
The support vector machine (SVM) is a classical machine learning method. The hinge loss and the least absolute shrinkage and selection operator (LASSO) penalty are both commonly used in traditional SVMs. However, the hinge loss is not differentiable, and the LASSO penalty does not have the oracle property. In this paper, the huberized loss is combined with non-convex penalties to obtain a model that has the advantages of both computational simplicity and the oracle property, contributing to higher accuracy than traditional SVMs. It is experimentally demonstrated that the two non-convex huberized-SVM methods, the smoothly clipped absolute deviation huberized-SVM (SCAD-HSVM) and the minimax concave penalty huberized-SVM (MCP-HSVM), outperform the traditional SVM method in terms of prediction accuracy and classifier performance. They are also superior in terms of variable selection, especially when there is high linear correlation between the variables. When applied to the prediction of financial distress in listed companies, the variables that can affect and predict financial distress are accurately filtered out. Among all the indicators, the per-share indicators have the greatest influence, while the solvency indicators have the weakest. Listed companies can assess their financial situation with the indicators screened by our algorithm and issue an early warning of possible financial distress in advance with higher precision.
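A minimal sketch of one common huberized hinge loss, a piecewise quadratic/linear surrogate that is differentiable everywhere, unlike the plain hinge; the parameter `delta` and the evaluation points are illustrative:

```python
def huberized_hinge(t, delta=2.0):
    """Huberized hinge loss as a function of the margin t = y * f(x).

    Zero for t > 1, quadratic on (1 - delta, 1], linear below: the
    quadratic segment smooths the hinge's kink at t = 1.
    """
    if t > 1.0:
        return 0.0
    if t >= 1.0 - delta:
        return (1.0 - t) ** 2 / (2.0 * delta)
    return 1.0 - t - delta / 2.0

# The pieces meet continuously at both knots (t = 1 and t = 1 - delta).
print(huberized_hinge(1.0), huberized_hinge(-1.0), huberized_hinge(-2.0))
```

Because every segment is differentiable and the segments join smoothly, gradient-based solvers apply directly, which is the computational simplicity the abstract refers to; the oracle property comes from the SCAD/MCP penalty, not from the loss.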
Landslide is a serious natural disaster, second only to earthquakes and floods, that poses a great threat to people's lives and property. Traditional landslide research based on experience-driven or statistical models produces assessment results that are subjective, difficult to quantify, and lacking in specificity. As a newer approach to landslide susceptibility assessment, machine learning can greatly improve the accuracy of landslide susceptibility models by constructing statistical models from data. Taking western Henan as an example, this study selected 16 landslide influencing factors covering topography, geological environment, hydrological conditions, and human activities; the 11 factors with the most significant influence on landslides were then selected by the recursive feature elimination (RFE) method. Five machine learning methods [Support Vector Machines (SVM), Logistic Regression (LR), Random Forest (RF), Extreme Gradient Boosting (XGBoost), and Linear Discriminant Analysis (LDA)] were used to construct the spatial distribution model of landslide susceptibility. The models were evaluated by the receiver operating characteristic curve and statistical indices. After analysis and comparison, the XGBoost model (AUC 0.8759) performed best, was well suited to this type of problem, and showed high adaptability to the landslide data. The landslide susceptibility maps of the five models show the overall distribution: the extremely high and high susceptibility areas are located in the Funiu Mountain range in the southwest, the Xiaoshan Mountain range in the west, and the Yellow River Basin in the north. These areas have large terrain fluctuations, complicated geological structural environments, and frequent human engineering activities. The extremely high and high susceptibility areas cover 12,043.3 km² and 3,087.45 km², accounting for 47.61% and 12.20% of the total study area, respectively. Our study reflects the distribution of landslide susceptibility in western Henan Province and provides a scientific basis for regional disaster warning, prediction, and resource protection, with important practical significance for subsequent landslide disaster management.
Classical machine learning, which sits at the intersection of artificial intelligence and statistics, investigates and formulates algorithms that can be used to discover patterns in given data and to make forecasts based on that data. Classical machine learning has a quantum counterpart, known as quantum machine learning (QML). QML, a field of quantum computing, uses quantum mechanical principles and concepts, including superposition, entanglement, and the quantum adiabatic theorem, to assess data and make forecasts based on it. At present, research in QML has taken two main approaches. The first approach involves implementing the computationally expensive subroutines of classical machine learning algorithms on a quantum computer. The second approach concerns applying classical machine learning algorithms to quantum information to speed up the performance of the algorithms. The work presented in this manuscript proposes a quantum support vector algorithm that can be used to forecast solar irradiation. The novelty of this work lies in applying quantum mechanical principles to machine learning. The Python programming language was used to simulate the performance of the proposed algorithm on a classical computer. The simulation results obtained show the usefulness of this algorithm for predicting solar irradiation.
With the development of automation and informatization in the steelmaking industry, the human brain gradually fails to cope with the increasing amount of data generated during the steelmaking process. Machine learning technology provides a new method, beyond production experience and metallurgical principles, for dealing with large amounts of data, and its application in the steelmaking process has become a research hotspot in recent years. This paper provides an overview of the applications of machine learning in steelmaking process modeling, covering hot metal pretreatment, primary steelmaking, secondary refining, and some other aspects. The three most frequently used machine learning algorithms in steelmaking process modeling are the artificial neural network, the support vector machine, and case-based reasoning, accounting for 56%, 14%, and 10% of applications, respectively. Data collected in steelmaking plants are frequently faulty; thus, data processing, especially data cleaning, is crucially important to the performance of machine learning models. The detection of variable importance can be used to optimize process parameters and guide production. Machine learning is used in hot metal pretreatment modeling mainly for endpoint S content prediction. Predictions of the endpoint element compositions and the process parameters are widely investigated in primary steelmaking. Machine learning is used in secondary refining modeling mainly for ladle furnace, Ruhrstahl–Heraeus, vacuum degassing, argon oxygen decarburization, and vacuum oxygen decarburization processes. Further development of machine learning in steelmaking process modeling can be realized through additional efforts in the construction of data platforms, the industrial transformation of research achievements to the practical steelmaking process, and the improvement of the universality of machine learning models.
The total organic carbon (TOC) content usually determines the hydrocarbon generation potential of a formation; a higher TOC content often corresponds to a greater possibility of generating large amounts of oil or gas. Hence, accurately calculating the TOC content of a formation is very important, and present research focuses on calculating it precisely using machine learning. At present, many machine learning methods, including backpropagation neural networks, support vector regression, random forests, extreme learning machines, and deep learning, are employed to evaluate the TOC content. However, the principles and perspectives of these algorithms differ considerably. This paper reviews the application of various machine learning algorithms to TOC content evaluation problems. Of the algorithms used for TOC content prediction, two, the backpropagation neural network and support vector regression, are the most commonly used, and the backpropagation neural network is sometimes combined with other algorithms to achieve better results. Additionally, combining multiple algorithms or using deep learning to increase the number of network layers can further improve the TOC content prediction. Prediction by a backpropagation neural network may be better than that by support vector regression; nevertheless, using any type of machine learning algorithm improves the TOC content prediction in a given research block. According to some published literature, the determination coefficient (R²) can be increased by up to 0.46 after applying machine learning. Deep learning algorithms may be the next breakthrough direction that can significantly improve the prediction of the TOC content. Evaluating the TOC content based on machine learning is of great significance.
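The determination coefficient quoted above is computed as follows; the TOC values are toy numbers for illustration, not data from any published block:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean) ** 2 for t in y_true)               # total sum of squares
    return 1.0 - ss_res / ss_tot

# Toy TOC values (wt%): a close fit gives R^2 near 1.
print(r_squared([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))  # ≈ 0.98
```

An R² gain of 0.46 is therefore substantial: it means nearly half of the previously unexplained variance in measured TOC is captured by the machine learning model.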
Path loss prediction models are vital for accurately modeling signal propagation in wireless channels. Empirical and deterministic models used for path loss prediction have not produced optimal results. In this paper, we introduce machine learning algorithms to path loss prediction because they offer a flexible network architecture and can exploit extensive data. We apply support vector regression (SVR) and radial basis function (RBF) models to path loss prediction in the investigated environments. The SVR model was able to process several input parameters without introducing complexity to the network architecture, while the RBF model provides good function approximation. Hyperparameter tuning of the machine learning models was carried out to achieve optimal results. The performances of the SVR and RBF models were compared and the results validated using the root-mean-squared error (RMSE). The two machine learning algorithms were also compared with the COST-231, SUI, Egli, free-space, and COST-231 W-I models; these analytical models overpredicted path loss. Overall, the machine learning models predicted path loss with greater accuracy than the empirical models. The SVR model performed best across all indices, with RMSE values of 1.378 dB, 1.4523 dB, and 2.1568 dB in rural, suburban, and urban settings, respectively, and should therefore be adopted for signal propagation modeling in the investigated environments and beyond.
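The RMSE used for validation can be computed as follows; the path-loss samples are illustrative values in dB, not the measured data from the investigated environments:

```python
import math

def rmse(measured, predicted):
    """Root-mean-squared error between measured and predicted path loss (dB)."""
    n = len(measured)
    return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n)

# Toy path-loss samples in dB (illustrative, not the paper's drive-test data).
measured = [112.0, 118.5, 121.0, 125.5]
predicted = [110.5, 119.0, 122.5, 124.0]
print(round(rmse(measured, predicted), 3), "dB")
```

Because path loss is expressed in dB, the RMSE is also in dB, which is why values such as 1.378 dB can be compared directly across rural, suburban, and urban settings.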
This paper presents a new algorithm for support vector machine (SVM) training, which trains a machine based on the cluster centers of the errors caused by the current machine. Experiments with various training sets show that the computation time of this new algorithm scales almost linearly with training set size, so it may be applied to much larger training sets than standard quadratic programming (QP) techniques.
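A minimal sketch of the core idea: collect the points misclassified by the current machine and reduce them to cluster centers, which become candidate training vectors for the next iteration. Using a single centroid per class and a toy classifier are simplifying assumptions here, not the paper's exact clustering procedure:

```python
def error_centroids(points, labels, predict):
    """Group the points misclassified by the current machine by their true
    label and return one centroid per group - the candidate vectors the
    next training round would use instead of all individual errors."""
    groups = {}
    for x, y in zip(points, labels):
        if predict(x) != y:          # an error of the current machine
            groups.setdefault(y, []).append(x)
    return {
        y: tuple(sum(coord) / len(pts) for coord in zip(*pts))
        for y, pts in groups.items()
    }

# Toy 2-D example with a deliberately bad classifier that labels everything +1,
# so both -1 points are errors and collapse into one centroid.
pts = [(0.0, 0.0), (2.0, 2.0), (4.0, 0.0)]
lab = [-1, +1, -1]
print(error_centroids(pts, lab, lambda x: +1))
```

Training on a handful of centroids rather than every misclassified point is what keeps the per-iteration QP small, which is consistent with the near-linear scaling the paper reports.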
Heart failure is now widespread throughout the world; heart disease affects approximately 48% of the population and is expensive and difficult to cure. This research paper presents machine learning models to predict heart failure. The fundamental idea is to compare the correctness of various machine learning (ML) algorithms and to use boosting algorithms to improve the models' prediction accuracy. Supervised algorithms such as K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Decision Trees (DT), Random Forest (RF), and Logistic Regression (LR) are considered to achieve the best results. Boosting algorithms such as Extreme Gradient Boosting (XGBoost) and CatBoost are also used to improve the prediction, together with Artificial Neural Networks (ANN). This research also focuses on data visualization to identify patterns, trends, and outliers in a massive data set. Python and scikit-learn are used for ML; TensorFlow and Keras, along with Python, are used for ANN model training. The DT and RF algorithms achieved the highest accuracy of 95% among the classifiers, while KNN obtained the second-highest accuracy of 93.33%. XGBoost achieved an accuracy of 91.67%; SVM, CatBoost, and ANN each had an accuracy of 90%; and LR had 88.33% accuracy.
An inverse learning control scheme using the support vector machine (SVM) for regression was proposed. The inverse learning approach was originally researched in neural networks; compared with neural networks, SVMs overcome the problems of local minima and the curse of dimensionality. Additionally, the good generalization performance of SVMs increases the robustness of the control system. The method of designing an SVM inverse learning controller is presented. The proposed method is demonstrated on tracking problems, and its performance is satisfactory.
Background: Depression is an emotional disorder caused by a variety of factors. With the accelerating pace of life, the competitive pressure people face in life and work is increasing, and the incidence of depression is rising year by year; in-depth study of the pathogenesis of depression and the development of depression risk prediction models are therefore becoming increasingly important. Method: The data for this study are derived from the 2017–2018 follow-up data of the National Health and Nutrition Examination Survey database, a publicly available database using a multi-stage, hierarchical, clustered, probability sampling design to determine a nationally representative sample of non-institutionalized US civilians. Participants completed home interviews, laboratory measurements, and a physical examination; details of the survey design have been published previously. This study evaluated the risk factors for the occurrence of depression across multiple variables such as age, sex, and comorbid complications. Four machine learning algorithms (logistic regression, Lasso regression, support vector machine, and random forest) were used to establish predictive classification models, and the area under the receiver operating characteristic curve and the accuracy were compared. The dataset was validated using 10-fold cross-validation. Result: After excluding invalid samples, 815 samples were included, of which 570 cases were assigned to the validation set and 245 cases to the training set. The area under the curve (AUC) of the nomogram establishing depression risk based on logistic regression was 0.73. Among the three machine learning models, the Lasso regression-based model had an AUC of 0.548, the mean AUC for the support vector machine was 0.695, and the random forest had an AUC of 0.613. The support vector machine-based model showed the best predictive performance compared with the other machine learning models. Conclusion: Random forest-based prediction models are able to assist clinicians in providing decision support when it is difficult to give an exact diagnosis. The model has good clinical utility and helps clinicians identify high-risk patients and perform individualized treatment. All four established models (logistic regression, Lasso regression, support vector machine, and random forest) have good predictive power.
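The 10-fold cross-validation applied to the 815-sample dataset can be sketched as an index split; the split logic below is a generic sketch, not the study's code, and the fold sizes simply follow from n = 815, k = 10:

```python
def k_fold_indices(n, k=10):
    """Yield (train_idx, test_idx) index pairs for k-fold cross-validation."""
    idx = list(range(n))
    fold = n // k
    for i in range(k):
        lo = i * fold
        hi = lo + fold if i < k - 1 else n  # last fold absorbs the remainder
        test = idx[lo:hi]
        train = idx[:lo] + idx[hi:]
        yield train, test

folds = list(k_fold_indices(815, k=10))
# 10 folds: the first nine hold 81 test samples each, the last holds 86.
print(len(folds), len(folds[0][1]), len(folds[-1][1]))
```

Each sample is used for testing exactly once, so the per-fold AUC values can be averaged into the mean AUC figures reported above.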
Quantum computing is a promising new approach to tackling complex real-world computational problems by harnessing the power of quantum mechanical principles. The inherent parallelism and exponential computational power of quantum systems hold the potential to outpace classical counterparts in solving the complex optimization problems that are pervasive in machine learning. The Quantum Support Vector Machine (QSVM) is a quantum machine learning algorithm, inspired by the classical Support Vector Machine (SVM), that exploits quantum parallelism to efficiently classify data points in high-dimensional feature spaces. We provide a comprehensive overview of the underlying principles of QSVM, elucidating how different quantum feature maps and quantum kernels enable the manipulation of quantum states to perform classification tasks. Through a comparative analysis, we examine the quantum advantage achieved by these algorithms in terms of speedup and solution quality. As a case study, we explored the potential of quantum paradigms in the context of a real-world problem: classifying pancreatic cancer biomarker data. The Support Vector Classifier (SVC) algorithm was employed for the classical approach, while the QSVM algorithm was executed on a quantum simulator provided by the Qiskit quantum computing framework. The classical approach and the quantum-based techniques reported similar accuracy, suggesting that both methods captured similar underlying patterns in the dataset. Remarkably, the quantum implementations exhibited substantially reduced execution times, demonstrating the potential of quantum approaches for enhancing classification efficiency. This affirms the growing significance of quantum computing as a transformative tool for augmenting machine learning paradigms and underscores the potency of quantum execution for computational acceleration.
This manuscript presents an augmented Lagrangian fast projected gradient method (ALFPGM) with an improved working set selection scheme, pWSS, a decomposition-based algorithm for training support vector classification machines (SVM). The manuscript describes the ALFPGM algorithm, provides numerical results for training SVMs on large data sets, and compares the training times of ALFPGM and the Sequential Minimal Optimization (SMO) algorithm from the scikit-learn library. The numerical results demonstrate that ALFPGM with the improved working set selection scheme is capable of training SVMs with tens of thousands of training examples in a fraction of the training time of some widely adopted SVM tools.
A multi-layer adaptive parameter-optimization algorithm is developed for improving least squares support vector machines (LS-SVM), and a military aircraft life-cycle cost (LCC) intelligent estimation model is proposed based on the improved LS-SVM. The intelligent cost estimation process is divided into three steps in the model. In the first step, a cost-driving factor is selected, which is significant for cost estimation. In the second step, military aircraft training samples over the costs and the cost-driving-factor set are fitted by the LS-SVM. The model can then be used for cost estimation of new aircraft types. Chinese military aircraft costs are estimated in the paper. The results show that the costs estimated by the new model are closer to the true costs than those of the traditionally used methods.
In recent years, support vector machine methods have gradually become a main research direction in machine learning. The support vector machine carries a small structural risk compared with traditional learning methods, which allows the training error and the classifier capacity to reach a relatively balanced state. It also has the advantages of strong adaptability and strong generalization ability, and has been widely praised by the industry. The following discussion focuses on the application of the support vector machine in machine learning.
Cervical cancer is screened by the Pap smear methodology for detection and classification purposes. Pap smear images of the cervical region are employed to detect and classify abnormality of cervical tissues. In this paper, we propose the first system able to classify Pap smear images into a seven-class problem. Pap smear images are exploited to design a computer-aided diagnosis system that classifies abnormality in cervical image cells. Automated features extracted using ResNet101 are employed to discriminate the seven classes of images in a Support Vector Machine (SVM) classifier. The proposed system succeeds in distinguishing between the levels of normal cases with 100% accuracy and 100% sensitivity, and it can distinguish between normal and abnormal cases with an accuracy of 100%. The high level of abnormality is then studied and classified with high accuracy, while the low level of abnormality is studied separately and classified into two classes, mild and moderate dysplasia, with ~92% accuracy. The proposed system is built in a cascading manner with five models of polynomial SVM classifiers. The overall training accuracy for all cases is 100%, while the overall test accuracy for all seven classes is around 92%, and the overall accuracy reaches 97.3%. The proposed system facilitates the detection and classification of cervical cells in Pap smear images and supports early diagnosis of cervical cancer, which may increase the survival rate in women.
Interior Alaska has a short growing season of 110 days. Knowledge of the timings of crop flowering and maturity provides information for agricultural decision making. In this study, six machine learning algorithms, namely Linear Discriminant Analysis (LDA), Support Vector Machines (SVMs), k-nearest neighbor (kNN), Naïve Bayes (NB), Recursive Partitioning and Regression Trees (RPART), and Random Forest (RF), were selected to forecast the timings of barley flowering and maturity based on the Alaska Crop Datasets and climate data from 1991 to 2016 in Fairbanks, Alaska. Among the 32 models fit to forecast flowering time, two from LDA, 12 from SVMs, four from NB, and three from RF outperformed the models from the other algorithms with the highest accuracy, while models from kNN performed worst. Among the 32 models fit to forecast maturity time, two models from LDA outperformed the models from the other algorithms, while models from kNN and RPART performed worst. The machine learning models also provided an explanation of variable importance; in this study, four out of the six algorithms gave the same variable importance order. Sowing date was the most important variable for forecasting flowering but a less important variable for forecasting maturity. The daily maximum temperature may be more important than the daily minimum temperature for fitting flowering models, while the daily minimum temperature may be more important than the daily maximum temperature for fitting maturity models. The results indicate that machine learning models provide a promising technique for forecasting the timings of flowering and maturity of barley.
Named Entity Recognition aims to identify and classify rigid designators in text, such as proper names, biological species, and temporal expressions, into predefined categories. There has been growing interest in this field of research since the early 1990s. Named Entity Recognition plays a vital role in different fields of natural language processing, such as machine translation, information extraction, question answering systems, and various other fields. In this paper, Named Entity Recognition for Nepali text based on the Support Vector Machine (SVM), one of the machine learning approaches for classification tasks, is presented. A set of features is extracted from the training data set, and the accuracy and efficiency of the SVM classifier are analyzed for three different sizes of training data. The recognition system is tested with ten datasets of Nepali text. The strength of this work lies in the efficient feature extraction and the comprehensive recognition techniques. The SVM-based Named Entity Recognition is limited to a certain set of features and uses a small dictionary, which affects its performance. The learning performance of the recognition system was observed: the system can learn well from a small set of training data and increases its rate of learning as the training size grows.
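Feature extraction of the kind typically fed to an SVM-based recognizer can be sketched as window features around each token; the exact feature set used for Nepali text in the paper is not specified here, so the features below are illustrative assumptions (an English example is used for readability):

```python
def token_features(tokens, i):
    """Window features for the token at position i - the kind of sparse
    feature vector an SVM-based named-entity recognizer consumes
    (illustrative feature set, not the paper's exact one)."""
    tok = tokens[i]
    return {
        "word": tok,
        "prev": tokens[i - 1] if i > 0 else "<BOS>",          # left context
        "next": tokens[i + 1] if i < len(tokens) - 1 else "<EOS>",  # right context
        "is_title": tok[:1].isupper(),                         # capitalization cue
        "is_digit": tok.isdigit(),                             # numeric cue
        "suffix2": tok[-2:],                                   # morphological cue
    }

feats = token_features(["Kathmandu", "is", "in", "Nepal"], 0)
print(feats["prev"], feats["is_title"], feats["suffix2"])
```

Each dictionary is then one-hot encoded into a sparse vector, and one binary SVM per entity class (or a one-vs-rest ensemble) is trained over those vectors.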
Nowadays, power quality issues are becoming a significant research topic because of the increasing inclusion of very sensitive devices and considerable renewable energy sources. In general, most previous power quality classification techniques focused on single power quality events and did not include an optimal feature selection process. This paper presents a classification system that employs the wavelet transform and the RMS profile to extract the main features of measured waveforms containing either single or complex disturbances. A data mining process is designed to select the optimal set of features that best describes each disturbance present in the waveform. Support vector machine binary classifiers organized in a "One vs Rest" architecture are individually optimized to classify single and complex disturbances. The parameters that rule the performance of each binary classifier are also individually adjusted using a grid search algorithm that helps them achieve optimal performance; this specialized process significantly improves the total classification accuracy. Several single and complex disturbances were simulated in order to train and test the algorithm. The results show that the classifier is capable of identifying more than 99% of single disturbances and more than 97% of complex disturbances.
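The "One vs Rest" decision rule described above can be sketched as follows; the disturbance labels and decision scores are illustrative, not outputs of the paper's trained classifiers:

```python
def one_vs_rest_predict(scores_by_class):
    """'One vs Rest' decision: each binary SVM scores its own disturbance
    class against everything else; the waveform receives the label whose
    classifier returns the highest decision score."""
    return max(scores_by_class, key=scores_by_class.get)

# Toy decision scores from three binary SVMs (illustrative values):
# only the 'sag' classifier votes strongly for its own class.
print(one_vs_rest_predict({"sag": 0.8, "swell": -0.3, "harmonics": 0.2}))
```

Because each binary classifier is tuned independently (here, via grid search over its own hyperparameters), a class that is hard to separate can get its own kernel and regularization settings without degrading the other classifiers.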
Funding: supported by projects of the China Geological Survey (DD20221729, DD20190291) and the Zhuhai Urban Geological Survey (including informatization) (MZCD–2201–008).
Abstract: Machine learning is currently one of the research hotspots in the field of landslide prediction. To clarify and evaluate the differences in characteristics and predictive performance of different machine learning models, Conghua District, the area of Guangzhou most prone to landslide disasters, was selected for landslide susceptibility evaluation. Evaluation factors were selected using correlation analysis and the variance inflation factor (VIF) method. Landslide susceptibility models were constructed with four machine learning methods: Logistic Regression (LR), Random Forest (RF), Support Vector Machines (SVM), and Extreme Gradient Boosting (XGB). The models were compared and evaluated through statistical indices and receiver operating characteristic (ROC) curves. The results showed that the LR, RF, SVM, and XGB models all have good predictive performance for landslide susceptibility, with area under the curve (AUC) values of 0.752, 0.965, 0.996, and 0.998, respectively. The XGB model had the highest predictive ability, followed by the RF, SVM, and LR models. The frequency ratio (FR) accuracy of the LR, RF, SVM, and XGB models was 0.775, 0.842, 0.759, and 0.822, respectively. The RF and XGB models were superior to the LR and SVM models, indicating that ensemble algorithms have better predictive ability than single classification algorithms in regional landslide classification problems.
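The four-model ROC/AUC comparison can be sketched as follows on synthetic data. As an assumption to keep the sketch dependency-free, scikit-learn's GradientBoostingClassifier stands in for XGBoost; the real study's evaluation factors and landslide inventory are of course not reproduced here.

```python
# Sketch of a four-model AUC comparison on synthetic "landslide" data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(random_state=42),
    "SVM": SVC(probability=True, random_state=42),
    "GB (XGB stand-in)": GradientBoostingClassifier(random_state=42),
}
aucs = {}
for name, model in models.items():
    proba = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    aucs[name] = roc_auc_score(y_te, proba)
    print(f"{name}: AUC = {aucs[name]:.3f}")
```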
Funding: Financially supported by the National Natural Science Foundation of China (No. 51974028).
Abstract: The martensitic transformation temperature is the basis for the application of shape memory alloys (SMAs), and the ability to predict it quickly and accurately has great practical significance. In this work, machine learning (ML) methods were used to accelerate the search for shape memory alloys with a targeted property (phase transition temperature). A group of compositions was selected from numerous unexplored candidates to design shape memory alloys by the reverse design method. Composition modeling and feature modeling were used to predict the phase transition temperature of the alloys. Experimental results on the designed alloys verified the effectiveness of the support vector regression (SVR) model. The results show that the machine learning model can find target materials more efficiently and with greater specificity, realizing the accurate and rapid design of shape memory alloys with a specified target phase transition temperature. On this basis, the relationship between the phase transition temperature and the material descriptors is analyzed, showing that the key factors affecting the phase transition temperature of shape memory alloys are rooted in the strength of the bond energy between atoms. This work provides new ideas for the controllable design and performance optimization of Cu-based shape memory alloys.
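The targeted-search idea can be sketched as fitting SVR on known alloys' descriptors and then screening unexplored candidates for a target transition temperature. All data, descriptor meanings, and the target value below are synthetic illustrations, not the paper's alloy data.

```python
# Minimal sketch of reverse design: fit SVR on known compositions, then
# screen a candidate pool for the composition closest to a target temperature.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_known = rng.uniform(0, 1, size=(80, 3))        # e.g. scaled composition fractions
T_known = 300 + 120 * X_known[:, 0] - 60 * X_known[:, 1] + rng.normal(0, 5, 80)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100)).fit(X_known, T_known)

X_candidates = rng.uniform(0, 1, size=(1000, 3))  # unexplored composition space
T_pred = model.predict(X_candidates)
target = 350.0                                    # hypothetical target temperature (K)
best = X_candidates[np.argmin(np.abs(T_pred - target))]
print("candidate closest to target temperature:", best)
```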
Abstract: The support vector machine (SVM) is a classical machine learning method. Traditional SVMs usually use the hinge loss together with the least absolute shrinkage and selection operator (LASSO) penalty. However, the hinge loss is not differentiable, and the LASSO penalty lacks the oracle property. In this paper, the huberized loss is combined with non-convex penalties to obtain a model that has both computational simplicity and the oracle property, yielding higher accuracy than traditional SVMs. It is experimentally demonstrated that the two non-convex huberized-SVM methods, the smoothly clipped absolute deviation huberized-SVM (SCAD-HSVM) and the minimax concave penalty huberized-SVM (MCP-HSVM), outperform the traditional SVM in prediction accuracy and classifier performance. They are also superior in variable selection, especially when there is high linear correlation between variables. When applied to the prediction of financial distress in listed companies, the variables that affect and predict distress are accurately filtered out. Among all the indicators, the per-share indicators have the greatest influence, while the solvency indicators have the weakest. Listed companies can assess their financial situation with the indicators screened by our algorithm and issue early warnings of possible financial distress with higher precision.
Funding: Supported by the National Natural Science Foundation of China (41972262), the Hebei Natural Science Foundation for Excellent Young Scholars (D2020504032), the Central Plains Science and Technology Innovation Leader Project (214200510030), and the Key Research and Development Project of Henan Province (221111321500).
Abstract: Landslides are serious natural disasters, next only to earthquakes and floods, posing a great threat to people's lives and property. Traditional landslide research based on experience-driven or statistical models yields assessment results that are subjective, difficult to quantify, and lacking in pertinence. As a newer approach to landslide susceptibility assessment, machine learning can greatly improve model accuracy by constructing statistical models. Taking western Henan as an example, the study considered 16 landslide influencing factors covering topography, geological environment, hydrological conditions, and human activities; the 11 factors with the most significant influence on landslides were selected by the recursive feature elimination (RFE) method. Five machine learning methods [Support Vector Machines (SVM), Logistic Regression (LR), Random Forest (RF), Extreme Gradient Boosting (XGBoost), and Linear Discriminant Analysis (LDA)] were used to construct spatial distribution models of landslide susceptibility. The models were evaluated using the receiver operating characteristic curve and statistical indices. After analysis and comparison, the XGBoost model (AUC 0.8759) performed best and showed high adaptability to the landslide data. The landslide susceptibility maps of the five models show a consistent overall distribution: the extremely high and high susceptibility areas lie in the Funiu Mountain range in the southwest, the Xiaoshan Mountain range in the west, and the Yellow River Basin in the north. These areas have large terrain fluctuations, complicated geological structural environments, and frequent human engineering activities. The extremely high and high susceptibility areas cover 12043.3 km² and 3087.45 km², accounting for 47.61% and 12.20% of the study area, respectively. Our study reflects the distribution of landslide susceptibility in western Henan Province, providing a scientific basis for regional disaster warning, prediction, and resource protection, and has important practical significance for subsequent landslide disaster management.
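The 16-to-11 factor screening step above can be sketched with scikit-learn's RFE on synthetic data (the real influencing factors and landslide inventory are not reproduced here).

```python
# Sketch of recursive feature elimination keeping 11 of 16 candidate factors,
# as in the workflow described above; the data are synthetic.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=16, n_informative=8,
                           random_state=0)
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=11)
selector.fit(X, y)
kept = [i for i, keep in enumerate(selector.support_) if keep]
print("kept feature indices:", kept)
```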
Abstract: Classical machine learning, at the intersection of artificial intelligence and statistics, investigates and formulates algorithms that discover patterns in given data and make forecasts based on it. Classical machine learning has a quantum counterpart, known as quantum machine learning (QML). QML, a field of quantum computing, uses quantum mechanical principles and concepts, including superposition, entanglement, and the quantum adiabatic theorem, to assess data and make forecasts based on it. At present, research in QML has taken two main approaches. The first implements the computationally expensive subroutines of classical machine learning algorithms on a quantum computer. The second applies classical machine learning algorithms to quantum information to speed up their performance. The work presented in this manuscript proposes a quantum support vector algorithm that can be used to forecast solar irradiation. The novelty of this work lies in applying quantum mechanical principles to machine learning. The Python programming language was used to simulate the performance of the proposed algorithm on a classical computer. The simulation results obtained show the usefulness of this algorithm for predicting solar irradiation.
Funding: Supported by the National Natural Science Foundation of China (No. U1960202).
Abstract: With increasing automation and informatization in the steelmaking industry, human operators can no longer cope with the growing amount of data generated during the steelmaking process. Machine learning technology provides a new way to handle large amounts of data, beyond production experience and metallurgical principles. The application of machine learning to the steelmaking process has become a research hotspot in recent years. This paper provides an overview of machine learning applications in steelmaking process modeling, covering hot metal pretreatment, primary steelmaking, secondary refining, and other aspects. The three most frequently used machine learning algorithms in steelmaking process modeling are the artificial neural network, support vector machine, and case-based reasoning, accounting for 56%, 14%, and 10% of applications, respectively. Data collected in steelmaking plants are frequently faulty, so data processing, especially data cleaning, is crucially important to the performance of machine learning models. The detection of variable importance can be used to optimize process parameters and guide production. In hot metal pretreatment modeling, machine learning is used mainly for endpoint S content prediction. Predictions of endpoint element compositions and process parameters are widely investigated in primary steelmaking. In secondary refining, machine learning is applied mainly to ladle furnace, Ruhrstahl–Heraeus, vacuum degassing, argon oxygen decarburization, and vacuum oxygen decarburization processes. Further development of machine learning in steelmaking process modeling can be realized through additional efforts in constructing data platforms, transferring research achievements to practical steelmaking processes, and improving the universality of the machine learning models.
Funding: This project was funded by the Open Fund of the Key Laboratory of Exploration Technologies for Oil and Gas Resources, Ministry of Education (No. K2021-03), the National Natural Science Foundation of China (No. 42106213), the Hainan Provincial Natural Science Foundation of China (No. 421QN281), the China Postdoctoral Science Foundation (Nos. 2021M690161 and 2021T140691), and the Postdoctorate Funded Project in Hainan Province.
Abstract: The total organic carbon (TOC) content usually determines the hydrocarbon generation potential of a formation: a higher TOC content often corresponds to a greater possibility of generating large amounts of oil or gas. Hence, accurately calculating the TOC content of a formation is very important, and current research focuses on doing so with machine learning. At present, many machine learning methods, including backpropagation neural networks, support vector regression, random forests, extreme learning machines, and deep learning, are employed to evaluate the TOC content. However, the principles and perspectives of the various machine learning algorithms differ considerably. This paper reviews the application of various machine learning algorithms to TOC content evaluation. Of the algorithms used for TOC content prediction, the backpropagation neural network and support vector regression are the most common, and the backpropagation neural network is sometimes combined with other algorithms to achieve better results. Additionally, combining multiple algorithms or using deep learning to increase the number of network layers can further improve TOC content prediction. Prediction by backpropagation neural network may be better than that by support vector regression; nevertheless, using any type of machine learning algorithm improves TOC content prediction in a given research block. According to some published literature, the determination coefficient (R²) can be increased by up to 0.46 after using machine learning. Deep learning algorithms may be the next breakthrough direction, capable of significantly improving TOC content prediction. Evaluating the TOC content with machine learning is of great significance.
Abstract: Path loss prediction models are vital for accurately modeling signal propagation in wireless channels. Empirical and deterministic models used for path loss prediction have not produced optimal results. In this paper, we introduce machine learning algorithms to path loss prediction because they offer a flexible network architecture and can exploit extensive data. We apply support vector regression (SVR) and radial basis function (RBF) models to path loss prediction in the investigated environments. The SVR model processes several input parameters without adding complexity to the network architecture, while the RBF provides good function approximation. Hyperparameter tuning of the machine learning models was carried out to achieve optimal results. The performance of the SVR and RBF models was compared and the results validated using the root-mean-squared error (RMSE). The two machine learning algorithms were also compared with the COST-231, SUI, Egli, free-space, and COST-231 Walfisch-Ikegami models. The analytical models overpredicted path loss. Overall, the machine learning models predicted path loss with greater accuracy than the empirical models. The SVR model performed best across all indices, with RMSE values of 1.378 dB, 1.4523 dB, and 2.1568 dB in rural, suburban, and urban settings, respectively, and should therefore be adopted for signal propagation modeling in the investigated environments and beyond.
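As a hedged sketch of the SVR-with-RMSE workflow, one can fit an RBF-kernel SVR to a synthetic log-distance path-loss curve; the propagation data, constants, and hyperparameters below are illustrative assumptions, not the paper's drive-test measurements.

```python
# Sketch: RBF-kernel SVR fit to a synthetic log-distance path-loss curve,
# scored with RMSE as in the comparison above.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.svm import SVR

rng = np.random.default_rng(3)
d = rng.uniform(0.1, 5.0, size=(200, 1))                          # distance, km
path_loss = 120 + 35 * np.log10(d[:, 0]) + rng.normal(0, 2, 200)  # dB

model = SVR(kernel="rbf", C=100, gamma=1.0).fit(d, path_loss)
rmse = np.sqrt(mean_squared_error(path_loss, model.predict(d)))
print(f"training RMSE: {rmse:.2f} dB")
```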
Abstract: This paper presents a new algorithm for Support Vector Machine (SVM) training, which trains a machine on the cluster centers of the errors made by the current machine. Experiments with various training sets show that the computation time of this new algorithm scales almost linearly with training set size; it may therefore be applied to much larger training sets than standard quadratic programming (QP) techniques allow.
Funding: Taif University Researchers Supporting Project Number (TURSP-2020/73), Taif University, Taif, Saudi Arabia.
Abstract: Heart failure is now widespread throughout the world, with heart disease affecting approximately 48% of the population. The disease is expensive and difficult to cure. This paper presents machine learning models to predict heart failure. The fundamental idea is to compare the correctness of various Machine Learning (ML) algorithms and to use boosting algorithms to improve the models' predictive accuracy. Supervised algorithms, including K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Decision Trees (DT), Random Forest (RF), and Logistic Regression (LR), are considered to achieve the best results. Boosting algorithms, such as Extreme Gradient Boosting (XGBoost) and CatBoost, are also used alongside Artificial Neural Networks (ANN) to improve the prediction. This research also employs data visualization to identify patterns, trends, and outliers in a massive data set. Python and scikit-learn are used for ML; TensorFlow and Keras, along with Python, are used for ANN model training. The DT and RF algorithms achieved the highest accuracy, 95%, among the classifiers. KNN obtained the second-highest accuracy of 93.33%. XGBoost reached an accuracy of 91.67%; SVM, CatBoost, and ANN each reached 90%; and LR reached 88.33%.
Abstract: An inverse learning control scheme using the support vector machine (SVM) for regression is proposed. The inverse learning approach was originally investigated with neural networks; compared with neural networks, SVMs avoid the problems of local minima and the curse of dimensionality. Additionally, the good generalization performance of SVMs increases the robustness of the control system. A method for designing the SVM inverse learning controller is presented. The proposed method is demonstrated on tracking problems, and the performance is satisfactory.
Abstract: Background: Depression is an emotional disorder caused by a variety of factors. With the accelerating pace of life, the competitive pressure people face in life and work keeps increasing, and the incidence of depression rises year by year; in-depth study of the pathogenesis of depression and the development of depression risk prediction models are therefore becoming increasingly important. Method: The data are derived from the 2017–2018 follow-up data of the National Health and Nutrition Examination Survey database, a publicly available database that uses a multi-stage, hierarchical, clustered, probability sampling design to obtain a nationally representative sample of non-institutionalized US civilians. Participants completed home interviews, laboratory measurements, and a physical examination; details of the survey design have been published previously. This study evaluated risk factors for depression across multiple variables such as age, sex, and comorbid complications. Four machine learning algorithms (logistic regression, Lasso regression, support vector machine, and random forest) were used to establish predictive classification models, compared by the area under the receiver operating characteristic curve and accuracy. The dataset was validated using 10-fold cross-validation. Result: After excluding invalid samples, 815 samples were included, of which 570 cases were assigned to the validation set and 245 cases to the training set. The area under the curve (AUC) of the nomogram establishing depression risk based on logistic regression was 0.73. Among the three machine learning models, the Lasso regression-based model achieved an AUC of 0.548, the support vector machine a mean AUC of 0.695, and the random forest an AUC of 0.613. The support vector machine-based model predicted best among the machine models. Conclusion: Such prediction models can assist clinicians in providing decision support when an exact diagnosis is difficult. The models have good clinical utility and help clinicians identify high-risk patients and perform individualized treatment. The four established models, logistic regression, Lasso regression, support vector machine, and random forest, all have good predictive power.
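The 10-fold cross-validated AUC comparison across the four model families can be sketched as below. Assumptions: L1-penalized logistic regression stands in for "Lasso regression" (the task is classification), and the data are synthetic, not the NHANES records.

```python
# Sketch of a 10-fold cross-validated AUC comparison across four model families.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Imbalanced synthetic data as a stand-in for a depression-screening cohort.
X, y = make_classification(n_samples=500, n_features=12, weights=[0.7, 0.3],
                           random_state=7)
models = {
    "logistic": LogisticRegression(max_iter=1000),
    "lasso-logistic": LogisticRegression(penalty="l1", solver="liblinear"),
    "svm": SVC(probability=True),
    "random forest": RandomForestClassifier(random_state=7),
}
results = {}
for name, m in models.items():
    results[name] = cross_val_score(m, X, y, cv=10, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {results[name]:.3f}")
```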
Abstract: Quantum computing is a promising new approach to tackling complex real-world computational problems by harnessing the power of quantum mechanical principles. The inherent parallelism and exponential computational power of quantum systems hold the potential to outpace classical counterparts in solving complex optimization problems, which are pervasive in machine learning. The Quantum Support Vector Machine (QSVM) is a quantum machine learning algorithm, inspired by the classical Support Vector Machine (SVM), that exploits quantum parallelism to efficiently classify data points in high-dimensional feature spaces. We provide a comprehensive overview of the underlying principles of QSVM, elucidating how different quantum feature maps and quantum kernels enable the manipulation of quantum states to perform classification tasks. Through a comparative analysis, we reveal the quantum advantage achieved by these algorithms in terms of speedup and solution quality. As a case study, we explore the potential of quantum paradigms on a real-world problem: classifying pancreatic cancer biomarker data. The Support Vector Classifier (SVC) algorithm was employed for the classical approach, while the QSVM algorithm was executed on a quantum simulator provided by the Qiskit quantum computing framework. The classical approach and the quantum-based techniques reported similar accuracy, suggesting that these methods effectively captured similar underlying patterns in the dataset. Remarkably, the quantum implementations exhibited substantially reduced execution times, demonstrating the potential of quantum approaches to enhance classification efficiency. This affirms the growing significance of quantum computing as a transformative tool for augmenting machine learning paradigms and underscores the potency of quantum execution for computational acceleration.
Abstract: The manuscript presents an augmented Lagrangian fast projected gradient method (ALFPGM) with an improved working-set selection scheme, pWSS, a decomposition-based algorithm for training support vector classification machines (SVM). The manuscript describes the ALFPGM algorithm, provides numerical results for training SVMs on large data sets, and compares the training times of ALFPGM and the Sequential Minimal Optimization (SMO) algorithm from the scikit-learn library. The numerical results demonstrate that ALFPGM with the improved working-set selection scheme can train SVMs with tens of thousands of training examples in a fraction of the training time of some widely adopted SVM tools.
Abstract: A multi-layer adaptive parameter-optimization algorithm is developed to improve least squares support vector machines (LS-SVM), and a military aircraft life-cycle cost (LCC) intelligent estimation model is proposed based on the improved LS-SVM. The intelligent cost estimation process in the model has three steps. First, a cost-driver factor that is significant for cost estimation is selected. Second, training samples pairing military aircraft costs with the cost-driver factor set are fitted by the LS-SVM. The model can then be used to estimate the cost of new aircraft types. Chinese military aircraft costs are estimated in the paper, and the results show that the costs estimated by the new model are closer to the true costs than those of traditionally used methods.
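Unlike standard SVR's quadratic program, LS-SVM training reduces to a single linear system: with kernel matrix K, targets y, and regularization g, solve [[0, 1ᵀ], [1, K + I/g]] · [b; a] = [0; y], then predict f(x) = Σᵢ aᵢ K(x, xᵢ) + b. A minimal NumPy sketch follows; the one-dimensional "cost-driver" data are synthetic, not the paper's aircraft figures.

```python
# Minimal LS-SVM regression: training is one linear solve, not a QP.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(60, 1))            # "cost-driver factor"
y = np.sin(X[:, 0]) + rng.normal(0, 0.05, 60)   # "cost" response

g = 100.0
K = rbf_kernel(X, X)
n = len(y)
A = np.zeros((n + 1, n + 1))                    # bordered LS-SVM system
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / g
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

X_new = np.array([[0.5]])
pred = rbf_kernel(X_new, X) @ alpha + b
print(f"LS-SVM prediction at x=0.5: {pred[0]:.3f} (true sin(0.5)={np.sin(0.5):.3f})")
```

The single linear solve is what makes LS-SVM attractive for the small training sets typical of cost-estimation problems.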
Abstract: In recent years, support vector machine methods have gradually become a main research direction in machine learning. Compared with traditional learning methods, the support vector machine carries a small structural risk, which lets the training error and the classifier capacity reach a relatively balanced state. It also has strong adaptability and strong generalization ability, and has been widely praised in industry. The following discussion focuses on the application of the support vector machine in machine learning.
Funding: This work was supported by the Ministry of Higher Education Malaysia under the Fundamental Research Grant Scheme (FRGS/1/2021/SKK0/UNIMAP/02/1).
Abstract: Cervical cancer is screened by the Pap smear methodology for detection and classification purposes. Pap smear images of the cervical region are employed to detect and classify abnormality of cervical tissues. In this paper, we propose the first system able to classify Pap smear images in a seven-class problem. Pap smear images are used to design a computer-aided diagnosis system that classifies abnormality in cervical image cells. Automated features extracted using ResNet101 are employed to discriminate the seven classes of images in a Support Vector Machine (SVM) classifier. The proposed system distinguishes between the levels of normal cases with 100% accuracy and 100% sensitivity, and between normal and abnormal cases with 100% accuracy. The high level of abnormality is then studied and classified with high accuracy, while the low level of abnormality is studied separately and classified into two classes, mild and moderate dysplasia, with ~92% accuracy. The proposed system is built in a cascading manner from five polynomial-kernel SVM classifier models. The overall training accuracy for all cases is 100%, the overall test accuracy for all seven classes is around 92%, and the overall accuracy reaches 97.3%. The proposed system facilitates the detection and classification of cervical cells in Pap smear images and supports early diagnosis of cervical cancer, which may increase the survival rate in women.
Abstract: Interior Alaska has a short growing season of 110 days, so knowledge of the timing of crop flowering and maturity provides information for agricultural decision making. In this study, six machine learning algorithms, namely Linear Discriminant Analysis (LDA), Support Vector Machines (SVMs), k-nearest neighbor (kNN), Naïve Bayes (NB), Recursive Partitioning and Regression Trees (RPART), and Random Forest (RF), were selected to forecast the timing of barley flowering and maturity based on the Alaska Crop Datasets and climate data from 1991 to 2016 in Fairbanks, Alaska. Among 32 models fit to forecast flowering time, two from LDA, 12 from SVMs, four from NB, and three from RF outperformed models from the other algorithms with the highest accuracy; models from kNN performed worst. Among 32 models fit to forecast maturity time, two models from LDA outperformed the models from the other algorithms, while models from kNN and RPART performed worst. The machine learning models also provided a variable-importance explanation, and four of the six algorithms gave the same variable-importance order. Sowing date was the most important variable for forecasting flowering but a less important variable for forecasting maturity. The daily maximum temperature may be more important than the daily minimum temperature in fitting flowering models, while the daily minimum temperature may be more important than the daily maximum temperature in fitting maturity models. The results indicate that machine learning models provide a promising technique for forecasting the timing of flowering and maturity of barley.
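The variable-importance readout the study relies on can be sketched with a random forest on synthetic stand-ins for sowing date and temperature features; the variable names and the flowering rule below are invented for illustration only.

```python
# Sketch: random-forest variable importance for synthetic phenology features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
n = 300
sowing_day = rng.uniform(120, 160, n)   # day of year sown
tmax = rng.normal(20, 3, n)             # mean daily max temperature
tmin = rng.normal(8, 2, n)              # mean daily min temperature (pure noise here)
# Synthetic rule: early sowing plus warm days -> "early flowering" class.
early = (sowing_day < 140) & (tmax > 19)

X = np.column_stack([sowing_day, tmax, tmin])
rf = RandomForestClassifier(n_estimators=200, random_state=5).fit(X, early)
for name, imp in zip(["sowing_day", "tmax", "tmin"], rf.feature_importances_):
    print(f"{name}: importance = {imp:.2f}")
```

Because `tmin` carries no signal in this synthetic rule, the forest assigns it the smallest importance, mirroring how the study ranks its predictors.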