Accurately estimating blasting vibration during rock blasting is the foundation of blasting vibration management. In this study, Tuna Swarm Optimization (TSO), Whale Optimization Algorithm (WOA), and Cuckoo Search (CS) were used to optimize two hyperparameters in support vector regression (SVR). Based on these methods, three hybrid models to predict peak particle velocity (PPV) for bench blasting were developed. Eighty-eight samples were collected to establish the PPV database; eight initial blasting parameters were chosen as input parameters for the prediction model, and the PPV was the output parameter. The coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), and a10-index were selected as predictive performance evaluation indicators. The normalized mutual information value was then used to evaluate the impact of the various input parameters on the PPV prediction outcomes. According to the research findings, TSO, WOA, and CS can all enhance the predictive performance of the SVR model, and the TSO-SVR model provides the most accurate predictions. The performances of the optimized hybrid SVR models are superior to the unoptimized traditional prediction model. The maximum charge per delay impacts the PPV prediction value the most.
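The optimization loop these hybrid models share, a swarm-style metaheuristic searching SVR's two hyperparameters (typically the penalty C and the kernel width gamma) against a validation error, can be sketched generically. The sketch below is an assumption-level illustration, not the authors' TSO implementation: the stand-in error surface replaces cross-validated SVR error, and the update rule is a generic best-guided perturbation.

```python
import random

def validation_error(log_c, log_gamma):
    # Stand-in for cross-validated SVR error: an assumed surface whose
    # minimum sits at log10(C) = 2, log10(gamma) = -1 for illustration.
    return (log_c - 2.0) ** 2 + (log_gamma + 1.0) ** 2

def swarm_search(obj, bounds, n_agents=20, n_iter=200, seed=0):
    """Minimal population-based search: every agent drifts toward the
    best-known point with a random perturbation (a swarm-style update)."""
    rng = random.Random(seed)
    agents = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_agents)]
    best = list(min(agents, key=lambda p: obj(*p)))
    for _ in range(n_iter):
        for a in agents:
            for d, (lo, hi) in enumerate(bounds):
                step = 0.5 * (best[d] - a[d]) + rng.gauss(0.0, 0.1)
                a[d] = min(hi, max(lo, a[d] + step))
        cand = min(agents, key=lambda p: obj(*p))
        if obj(*cand) < obj(*best):
            best = list(cand)
    return best

# Search log-scaled (C, gamma), then map back to the actual values.
best_log_c, best_log_gamma = swarm_search(validation_error,
                                          bounds=[(-3.0, 5.0), (-5.0, 2.0)])
C, gamma = 10.0 ** best_log_c, 10.0 ** best_log_gamma
```

Searching in log space is the usual choice for C and gamma because useful values span several orders of magnitude.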
The distribution of data has a significant impact on the results of classification. When the distribution of one class is insignificant compared to that of another class, data imbalance occurs, which leads to rising outlier values and noise; the speed and performance of classification can therefore be greatly affected. Given the above problems, this paper starts from the motivation and mathematical representation of classification and puts forward a new classification method based on the relationships between different classification formulations. Combining the vector characteristics of the actual problem with the choice of matrix characteristics, we first analyze ordered regression and introduce slack variables to solve the constraint problem of the lone point. We then introduce fuzzy factors, on the basis of the support vector machine, to solve the problem of the gap between isolated points, and introduce cost control to solve the problem of sample skew. Finally, based on the bi-boundary support vector machine, a two-step weight-setting twin classifier is constructed. This helps identify multiple tasks with feature-selected patterns without the need for additional optimizers, which addresses large-scale classification problems that cannot effectively deal with a very low category distribution gap.
With widespread data collection and processing, privacy-preserving machine learning has become increasingly important in addressing privacy risks related to individuals. The support vector machine (SVM) is one of the most elementary learning models in machine learning, and privacy issues surrounding SVM classifier training have attracted increasing attention. In this paper, we investigate Differential Privacy-compliant Federated Machine Learning with Dimensionality Reduction, called FedDPDR-DPML, which greatly improves data utility while providing strong privacy guarantees. Considering that in distributed learning scenarios multiple participants usually hold unbalanced or small amounts of data, FedDPDR-DPML enables multiple participants to collaboratively learn a global model based on weighted model averaging and knowledge aggregation; the server then distributes the global model to each participant to improve local data utility. For high-dimensional data, we adopt differential privacy in both the principal component analysis (PCA)-based dimensionality reduction phase and the SVM classifier training phase, which improves model accuracy while achieving strict differential privacy protection. In addition, we train differential privacy (DP)-compliant SVM classifiers by adding noise to the objective function itself, leading to better data utility. Extensive experiments on three high-dimensional datasets demonstrate that FedDPDR-DPML can achieve high accuracy while ensuring strong privacy protection.
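The paper's objective-perturbation mechanism is specific to FedDPDR-DPML, but the underlying differential-privacy primitive, adding noise calibrated to sensitivity divided by epsilon, can be illustrated with the classical Laplace mechanism on a bounded mean query. This is an assumption-level sketch; `dp_mean` and its bounds are illustrative, not the paper's API.

```python
import math
import random

def laplace_sample(rng, scale):
    # Inverse-CDF sampling of Laplace(0, scale) using one uniform draw.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values, lower, upper, epsilon, rng):
    """Release the mean of bounded values under epsilon-DP.
    After clipping to [lower, upper], the sensitivity of the mean
    is (upper - lower) / n, so the noise scale is sensitivity / epsilon."""
    n = len(values)
    clipped = [min(upper, max(lower, v)) for v in values]
    sensitivity = (upper - lower) / n
    noise = laplace_sample(rng, sensitivity / epsilon)
    return sum(clipped) / n + noise

rng = random.Random(42)
data = [0.2, 0.4, 0.6, 0.8] * 250   # 1000 bounded records in [0, 1]
private_mean = dp_mean(data, 0.0, 1.0, epsilon=1.0, rng=rng)
```

With 1000 records the noise scale is only 0.001, so the released mean stays close to the true value of 0.5; utility degrades as n shrinks or epsilon tightens.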
This paper concerns effective fault diagnosis and fault-tolerant control for aeronautics electromechanical actuators. By borrowing the advantages of model-driven and data-driven methods, a fault-tolerant nonsingular terminal sliding mode control method based on the support vector machine (SVM) is proposed. An SVM is designed to estimate the fault by off-line learning from small-sample data, solving a convex quadratic programming problem, and is introduced into a high-gain observer so as to improve the state estimation and fault detection accuracy when a fault occurs. The state estimation value of the observer is used for state reconfiguration. A novel nonsingular terminal sliding mode surface is designed, and the Lyapunov theorem is used to derive a parameter adaptation law and a control law. It is guaranteed that the proposed controller achieves asymptotic stability, which is superior to many advanced fault-tolerant controllers. In addition, the parameter estimation can also help diagnose system faults, because faults are reflected in parameter variations. Extensive comparative simulation and experimental results illustrate the effectiveness and advancement of the proposed controller compared with several other mainstream controllers.
Steganography algorithms are methods of hiding data transfers in media files. Several machine learning architectures have been presented recently to improve stego image identification performance by using spatial information, and these methods have made it feasible to handle a wide range of problems associated with image analysis. Information embedding methods use images with little information or low payload, but the goal of all contemporary research is to employ high-payload images for classification. To address the need for both low- and high-payload images, this work provides a machine-learning approach to steganography image classification that uses the Curvelet transformation to efficiently extract characteristics from both types of images. The Support Vector Machine (SVM), a commonplace classification technique, is employed to determine whether an image is a stego or cover image. The Wavelet Obtained Weights (WOW), Spatial Universal Wavelet Relative Distortion (S-UNIWARD), Highly Undetectable Steganography (HUGO), and Minimizing the Power of Optimal Detector (MiPOD) steganography techniques are used in a variety of experimental scenarios to evaluate the performance of the proposed method. Using WOW at several payloads, the proposed approach achieves a classification accuracy of 98.60%, exhibiting its superiority over state-of-the-art methods.
The selection of important factors in machine learning-based susceptibility assessments is crucial to obtaining reliable susceptibility results. In this study, metaheuristic optimization and feature selection techniques were applied to identify the most important input parameters for mapping debris flow susceptibility in the southern mountain area of Chengde City in Hebei Province, China, using machine learning algorithms. In total, 133 historical debris flow records and 16 related factors were selected. The support vector machine (SVM) was first used as the base classifier, and then a hybrid model was introduced by a two-step process. First, the particle swarm optimization (PSO) algorithm was employed to select the SVM model hyperparameters. Second, two feature selection algorithms, namely principal component analysis (PCA) and PSO, were integrated into the PSO-based SVM model, which generated the PCA-PSO-SVM and FS-PSO-SVM models, respectively. Three statistical metrics (accuracy, recall, and specificity) and the area under the receiver operating characteristic curve (AUC) were employed to evaluate and validate the performance of the models. The results indicated that the feature selection-based models exhibited the best performance, followed by the PSO-based SVM and SVM models. Moreover, the performance of the FS-PSO-SVM model was better than that of the PCA-PSO-SVM model, showing the highest AUC, accuracy, recall, and specificity values in both the training and testing processes. It was found that the selection of optimal features is crucial to improving the reliability of debris flow susceptibility assessment results. Moreover, the PSO algorithm was found to be not only an effective tool for hyperparameter optimization but also a useful feature selection algorithm for improving the prediction accuracy of debris flow susceptibility using machine learning algorithms. The high and very high debris flow susceptibility zones cover 38.01% of the study area, where debris flows may occur under intensive human activities and heavy rainfall events.
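The PSO update used here for both hyperparameter tuning and feature selection is standard and can be sketched on a toy objective. The sphere-like surface below stands in for the study's SVM cross-validation error; this is an illustration, not the study's code.

```python
import random

def pso_minimize(obj, bounds, n_particles=25, n_iter=150, seed=1):
    """Canonical PSO: each velocity blends inertia, a pull toward the
    particle's personal best, and a pull toward the global best."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    pbest_val = [obj(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = list(pbest[g]), pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5           # inertia and acceleration weights
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = obj(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = list(pos[i]), val
                if val < gbest_val:
                    gbest, gbest_val = list(pos[i]), val
    return gbest, gbest_val

# Toy stand-in for a cross-validation error surface, minimum at (1, -2).
sphere = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best, best_val = pso_minimize(sphere, [(-5.0, 5.0), (-5.0, 5.0)])
```

In a hyperparameter-tuning setting the two coordinates would be the SVM's C and gamma; for feature selection the same update is typically applied to a continuous mask that is thresholded to 0/1.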
AIM: To develop a classifier for traditional Chinese medicine (TCM) syndrome differentiation of diabetic retinopathy (DR) using optimized machine learning algorithms, which can provide a basis for objective and intelligent TCM syndrome differentiation.
METHODS: Collated data on real-world DR cases were collected. A variety of machine learning methods were used to construct TCM syndrome classification models, and the best performer was selected as the basic model. A Genetic Algorithm (GA) was used for feature selection to obtain the optimal feature combination. Harris Hawk Optimization (HHO) was used for parameter optimization, and a classification model based on feature selection and parameter optimization was constructed. The performance of the model was compared with other optimization algorithms. The models were evaluated with accuracy, precision, recall, and F1 score as indicators.
RESULTS: Data on 970 cases that met the screening requirements were collected. The Support Vector Machine (SVM) was the best basic classification model, with an accuracy of 82.05%, a precision of 82.34%, a recall of 81.81%, and an F1 value of 81.76%. After GA screening, the optimal feature combination contained 37 feature values, which was consistent with TCM clinical practice. The model based on the optimal combination and SVM (GA_SVM) improved accuracy by 1.92% compared to the basic classifier. The SVM model based on HHO and GA optimization (HHO_GA_SVM) had the best performance and convergence speed compared with the other optimization algorithms; compared with the basic classification model, its accuracy improved by 3.51%.
CONCLUSION: HHO and GA optimization can improve the performance of SVM in TCM syndrome differentiation of DR, providing a new method and research direction for TCM intelligent assisted syndrome differentiation.
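The GA feature-screening step can be sketched as a search over binary feature masks. In the sketch below the fitness function is a toy stand-in for classifier accuracy on the selected subset; it is illustrative only, not the study's implementation.

```python
import random

def ga_feature_select(fitness, n_features, pop_size=30, n_gen=60, seed=3):
    """Minimal genetic algorithm over binary feature masks:
    elitism, tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(n_gen):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:2]                          # keep the two best
        while len(next_pop) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)  # tournament selection
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_features)
            child = p1[:cut] + p2[cut:]                # one-point crossover
            child = [b ^ (rng.random() < 0.05) for b in child]  # mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy fitness: reward agreeing with a known "good" subset of 12 features
# (a stand-in for cross-validated classifier accuracy on the subset).
target = [1] * 5 + [0] * 7
fitness = lambda mask: sum(1 for m, t in zip(mask, target) if m == t)
best_mask = ga_feature_select(fitness, n_features=12)
```

In the study's setting the fitness would instead train and validate an SVM on the features selected by each mask.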
The turbidite channel of the South China Sea has attracted considerable attention. Influenced by complex faulting and the rapid phase change of lithofacies, predicting the channel through conventional seismic attributes is not accurate enough. In response to this disadvantage, this study used a method combining grey relational analysis (GRA) and the support vector machine (SVM) and established a set of prediction procedures suitable for reservoirs with complex geological conditions. In the case study of the Huangliu Formation in the Qiongdongnan Basin, South China Sea, this study first nondimensionalized the conventional seismic attributes of Gas Layer Group I and then used the GRA method to obtain the main relational factors; a higher relational degree indicates a higher probability of responding to the attributes of the turbidite channel. The study then accumulated the optimized attributes with the highest relational factors to obtain a first-order accumulated sequence, which was used as the input training sample of the SVM model, thus successfully constructing the SVM turbidite channel model. Drilling results prove that the GRA-SVM method has a high drilling coincidence rate. Utilizing the core and logging data and making full use of the advantages of seismic inversion in predicting the sand boundaries of water channels, this study divides the sedimentary microfacies of the Huangliu Formation in the Lingshui 17-2 Gas Field. This comprehensive study shows that the GRA-SVM method has high accuracy for predicting turbidite channels and can be used as a superior turbidite channel prediction method under complex geological conditions.
This article delves into the analysis of performance and utilization of Support Vector Machines (SVMs) for the critical task of forest fire detection using image datasets. With the increasing threat of forest fires to ecosystems and human settlements, the need for rapid and accurate detection systems is of utmost importance. SVMs, renowned for their strong classification capabilities, exhibit proficiency in recognizing patterns associated with fire within images. By training on labeled data, SVMs acquire the ability to identify distinctive attributes associated with fire, such as flames, smoke, or alterations in the visual characteristics of the forest area. The document thoroughly examines the use of SVMs, covering crucial elements like data preprocessing, feature extraction, and model training. It rigorously evaluates parameters such as accuracy, efficiency, and practical applicability. The knowledge gained from this study aids in the development of efficient forest fire detection systems, enabling prompt responses and improving disaster management. Moreover, the correlation between SVM accuracy and the difficulties presented by high-dimensional datasets is carefully investigated, demonstrated through a revealing case study. The relationship between accuracy scores and the different resolutions used for resizing the training datasets has also been discussed in this article. These comprehensive studies result in a definitive overview of the difficulties faced and the potential sectors requiring further improvement and focus.
As one of the most important parts of a weapon system of systems (WSoS), quantitative evaluation of a reconnaissance satellite system (RSS) is indispensable during its construction and application. Aiming at the problem of nonlinear effectiveness evaluation under small-sample conditions, we propose an evaluation method based on support vector regression (SVR) to effectively address the defects of traditional methods. Considering that the performance of SVR is deeply influenced by the penalty factor, kernel type, and other parameters, the improved grey wolf optimizer (IGWO) is employed for parameter optimization. In the proposed IGWO algorithm, an opposition-based learning strategy is adopted to increase the probability of avoiding local optima, a mutation operator is used to escape from premature convergence, and differential convergence factors are applied to increase the rate of convergence. Numerical experiments on 14 test functions validate the applicability of the IGWO algorithm to global optimization. The index system and evaluation method are constructed based on the characteristics of the RSS. To validate the proposed IGWO-SVR evaluation method, eight benchmark data sets and combat simulation are employed to estimate the evaluation accuracy, convergence performance, and computational complexity. According to the experimental results, the proposed method outperforms several prediction-based evaluation methods, verifying its superiority and effectiveness in RSS operational effectiveness evaluation.
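The opposition-based learning strategy mentioned for IGWO is simple to state: for each random candidate x in [lo, hi], also evaluate its opposite x' = lo + hi - x and keep the better of the pair, which widens initial coverage of the search space. Below is a minimal sketch, illustrative rather than the authors' IGWO code.

```python
import random

def opposition_init(obj, bounds, n, seed=7):
    """Opposition-based initialization: for each random candidate x,
    also evaluate its opposite x' = lo + hi - x and keep the better one."""
    rng = random.Random(seed)
    population = []
    for _ in range(n):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        x_opp = [lo + hi - xi for (lo, hi), xi in zip(bounds, x)]
        population.append(min(x, x_opp, key=obj))
    return population

# Toy objective standing in for SVR validation error, optimum at (3, 3).
obj = lambda p: sum((v - 3.0) ** 2 for v in p)
bounds = [(-10.0, 10.0), (-10.0, 10.0)]
pop = opposition_init(obj, bounds, n=20)
```

Each kept individual is at least as good as its mirror image, so the starting population is biased toward the half-space containing the optimum.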
Electromagnetic scattering computation has developed rapidly for many years; some computing problems for complex and coated targets cannot be solved using existing theory and computing models. A computing model based on data is established to make up for the insufficiency of theoretical models. Based on the support vector regression method, which is formulated on the principle of minimizing structural risk, a data model to predict the unknown radar cross section of certain appointed targets is given. Comparison between the actual data and the results of this predictive model based on support vector regression proved that the method is workable and of comparable precision.
The hard rock pillar is one of the important structures in engineering design and excavation in underground mines. Accurate and convenient prediction of pillar stability is of great significance for underground space safety. This paper aims to develop hybrid support vector machine (SVM) models improved by three metaheuristic algorithms, known as the grey wolf optimizer (GWO), whale optimization algorithm (WOA), and sparrow search algorithm (SSA), for predicting hard rock pillar stability. An integrated dataset containing 306 hard rock pillars was established to generate the hybrid SVM models. Five parameters, including pillar height, pillar width, ratio of pillar width to height, uniaxial compressive strength, and pillar stress, were set as input parameters. Two global indices, three local indices, and the receiver operating characteristic (ROC) curve with the area under the ROC curve (AUC) were utilized to evaluate all hybrid models' performance. The results confirmed that the SSA-SVM model is the best prediction model, with the highest values of all global and local indices. Nevertheless, the performance of the SSA-SVM model for predicting the unstable pillar (AUC: 0.899) is not as good as those for stable (AUC: 0.975) and failed pillars (AUC: 0.990). To verify the effectiveness of the proposed models, 5 field cases were investigated in a metal mine and another 5 cases were collected from several published works. The validation results indicated that the SSA-SVM model obtained considerable accuracy, which means that the combination of SVM and metaheuristic algorithms is a feasible approach to predicting pillar stability.
Lung cancer is the most dangerous and death-causing disease, indicated by the presence of pulmonary nodules in the lung. It is mostly caused by the instinctive growth of cells in the lung. Lung nodule detection has a significant role in detecting and screening lung cancer in computed tomography (CT) scan images, and early detection plays an important role in the survival rate and treatment of lung cancer patients. Moreover, pulmonary nodule classification techniques based on convolutional neural networks can be used for the accurate and efficient detection of lung cancer. This work proposes an automatic nodule detection method in CT images based on a modified AlexNet architecture and the support vector machine (SVM) algorithm, namely LungNet-SVM. The proposed model consists of seven convolutional layers, three pooling layers, and two fully connected layers used to extract features. The support vector machine classifier is applied for the binary classification of nodules into benign and malignant. The experimental analysis is performed using the publicly available benchmark dataset Lung Nodule Analysis 2016 (LUNA16). The proposed model achieved 97.64% accuracy, 96.37% sensitivity, and 99.08% specificity. A comparative analysis has been carried out between the proposed LungNet-SVM model and existing state-of-the-art approaches for the classification of lung cancer. The experimental results indicate that the proposed LungNet-SVM model achieved remarkable performance on the LUNA16 dataset in terms of accuracy.
In this paper we apply the nonlinear time series analysis method to small-time scale traffic measurement data. The prediction-based method is used to determine the embedding dimension of the traffic data. Based on the reconstructed phase space, the local support vector machine prediction method is used to predict the traffic measurement data, and the BIC-based neighbouring point selection method is used to choose the number of the nearest neighbouring points for the local support vector machine regression model. The experimental results show that the local support vector machine prediction method whose neighbouring points are optimized can effectively predict the small-time scale traffic measurement data and can reproduce the statistical features of real traffic measurements.
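The two building blocks here, phase-space reconstruction by delay embedding and local prediction from nearest neighbours, can be sketched as follows. The neighbour-averaging local model below is a simplified stand-in for the paper's local SVM regression and BIC-based neighbour selection.

```python
import math

def delay_embed(series, m, tau):
    """Phase-space reconstruction: each state stacks m samples spaced
    tau apart; the target is the next observed value."""
    states, targets = [], []
    for i in range((m - 1) * tau, len(series) - 1):
        states.append([series[i - j * tau] for j in range(m)])
        targets.append(series[i + 1])
    return states, targets

def local_predict(train_states, train_targets, query, k=5):
    """Local model: average the targets of the k nearest neighbours in
    the reconstructed space (a stand-in for a locally fitted SVM model)."""
    order = sorted(range(len(train_states)),
                   key=lambda i: math.dist(train_states[i], query))
    return sum(train_targets[i] for i in order[:k]) / k

# Noise-free sine series: states close in phase space evolve similarly.
series = [math.sin(0.3 * t) for t in range(300)]
states, targets = delay_embed(series, m=3, tau=2)
pred = local_predict(states[:200], targets[:200], states[200])
```

Because nearby states on a deterministic attractor have nearby successors, even this crude local average tracks the next value closely; the paper replaces the average with a local SVM fit and selects k via BIC.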
In this study, we developed multiple hybrid machine-learning models to address parameter optimization limitations and enhance the spatial prediction of landslide susceptibility models. We created a geographic information system database, and our analysis results were used to prepare a landslide inventory map containing 359 landslide events identified from Google Earth, aerial photographs, and other validated sources. A support vector regression (SVR) machine-learning model was used to divide the landslide inventory into training (70%) and testing (30%) datasets. The landslide susceptibility map was produced using 14 causative factors. We applied the established gray wolf optimization (GWO) algorithm, bat algorithm (BA), and cuckoo optimization algorithm (COA) to fine-tune the parameters of the SVR model to improve its predictive accuracy. The resultant hybrid models, SVR-GWO, SVR-BA, and SVR-COA, were validated in terms of the area under the curve (AUC) and root mean square error (RMSE). The AUC values for the SVR-GWO (0.733), SVR-BA (0.724), and SVR-COA (0.738) models indicate their good prediction rates for landslide susceptibility modeling. SVR-COA had the greatest accuracy, with an RMSE of 0.21687, and SVR-BA had the least accuracy, with an RMSE of 0.23046. The three optimized hybrid models outperformed the SVR model (AUC = 0.704, RMSE = 0.26689), confirming the ability of metaheuristic algorithms to improve model performance.
Choosing optimal parameters for support vector regression (SVR) is an important step in SVR design, which strongly affects the performance of SVR. In this paper, based on an analysis of the influence of SVR parameters on generalization error, a new two-step approach is proposed for selecting SVR parameters. First, the kernel function and SVM parameters are optimized roughly through a genetic algorithm; then the kernel parameter is finely adjusted by local linear search. This approach has been successfully applied to a prediction model of the sulfur content in hot metal. The experimental results show that the proposed approach can yield better generalization performance of SVR than other methods.
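The two-step strategy, a rough global search followed by fine local adjustment of the kernel parameter, can be sketched as a coarse grid scan plus a shrinking local search. This illustrates the strategy only; the paper uses a genetic algorithm for the rough step, and the error function below is an assumed stand-in for measured generalization error.

```python
def coarse_then_fine(err, grid, n_refine=30, shrink=0.5):
    """Step 1: pick the best point on a coarse grid.
    Step 2: repeatedly compare the two neighbours at distance `step`
    and halve the step, homing in on the local minimum."""
    best = min(grid, key=err)
    step = (grid[1] - grid[0]) if len(grid) > 1 else 1.0
    for _ in range(n_refine):
        for cand in (best - step, best + step):
            if err(cand) < err(best):
                best = cand
        step *= shrink
    return best

# Stand-in validation error over log10(gamma), minimised at -1.3.
err = lambda g: (g + 1.3) ** 2
coarse_grid = [-5.0, -4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0]
best_log_gamma = coarse_then_fine(err, coarse_grid)
```

The coarse pass keeps the expensive evaluations few, and the local pass recovers precision the grid spacing cannot provide, which mirrors the rough-GA-then-local-search division of labour described in the abstract.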
Metamodeling techniques have been used in robust optimization to reduce the high computational cost of uncertainty analysis and improve the performance of robust optimization problems with computationally expensive simulation models. Existing metamodels mainly focus on polynomial regression (PR), neural networks (NN), and Kriging models; these metamodels are not well suited for large-scale robust optimization problems with small training sets and high nonlinearity. To address the problem, a reduced approximation model technique based on support vector regression (SVR) is introduced to improve the accuracy of metamodels, and a robust optimization method based on SVR is presented for problems involving high dimensionality and nonlinearity. First, appropriate design parameter samples are selected by experimental design theories; then the response samples are obtained from simulations such as finite element analysis, and the SVR metamodel is constructed and treated as the mean and the variance of the objective performance functions. Combined with other constraints, the robust optimization model is formed, which can be solved by a genetic algorithm (GA). The applicability of the developed method is demonstrated using a case study of a two-bar structure system. The performance of SVR was compared with those of PR, Kriging, and back-propagation neural networks (BPNN); the comparison results show that the prediction accuracy of the SVR metamodel was higher than those of the other metamodels under uncertainty. The robust optimization solutions are near the real result, and the proposed method is found to be accurate and efficient for robust optimization. This research provides an efficient method for robust optimization problems with complex structures.
Prediction of primary quality variables in real time with adaptation capability for varying process conditions is a critical task in process industries. This article focuses on the development of non-linear adaptive soft sensors for prediction of the naphtha initial boiling point (IBP) and end boiling point (EBP) in a crude distillation unit. In this work, adaptive inferential sensors with linear and non-linear local models are reported based on a recursive just-in-time learning (JITL) approach. The different types of local models designed are locally weighted regression (LWR), multiple linear regression (MLR), partial least squares regression (PLS), and support vector regression (SVR). In addition to model development, the effect of the relevant dataset size on model prediction accuracy and model computation time is also investigated. Results show that the JITL model based on support vector regression with iterative single data algorithm optimization (ISDA) as the local model (JITL-SVR:ISDA) yielded the best prediction accuracy in reasonable computation time.
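The JITL idea, building a throwaway local model from the training samples most relevant to the current query, can be sketched in one dimension. The local least-squares line below is a simplified stand-in for the paper's LWR, MLR, PLS, and SVR local models.

```python
def jitl_predict(X, y, query, k=8):
    """Just-in-time learning: pick the k training samples nearest the
    query, fit a local straight line by least squares, predict, discard.
    (A 1-D sketch; real JITL soft sensors use multivariate local models.)"""
    nearest = sorted(range(len(X)), key=lambda i: abs(X[i] - query))[:k]
    xs = [X[i] for i in nearest]
    ys = [y[i] for i in nearest]
    mx, my = sum(xs) / k, sum(ys) / k
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (yv - my) for x, yv in zip(xs, ys)) / sxx
    return my + slope * (query - mx)

# Nonlinear "process" y = x^2: a single global line fits poorly,
# but a line fitted only to samples near the query fits well.
X = [0.1 * i for i in range(100)]
y = [x * x for x in X]
pred = jitl_predict(X, y, query=5.05)
```

Because the model is rebuilt at every query from the currently relevant samples, the approach adapts naturally to drifting process conditions; the paper's recursive variant also updates the stored database online.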
As the solutions of the least squares support vector regression machine (LS-SVRM) are not sparse, it leads to slow prediction speed and limits its applications. The defects of the existing adaptive pruning algorithm for LS-SVRM are that the training speed is slow and the generalization performance is not satisfactory, especially for large-scale problems. Hence an improved algorithm is proposed. In order to accelerate the training speed, the pruned data point and the fast leave-one-out error are employed to validate the temporary model obtained after decremental learning. A novel objective function in the termination condition, which involves all the constraints generated by the training data points, and three pruning strategies are employed to improve the generalization performance. The effectiveness of the proposed algorithm is tested on six benchmark datasets. The sparse LS-SVRM model has a faster training speed and better generalization performance.
Removal of cloud cover on the satellite remote sensing image can effectively improve the availability of remote sensing images. For thin cloud cover, support vector value contourlet transform is used to achieve multi-scale decomposition of the area of thin cloud cover on remote sensing images. Through enhancing coefficients of high frequency and suppressing coefficients of low frequency, the thin cloud is removed. For thick cloud cover, if the areas of thick cloud cover on multi-source or multi-temporal remote sensing images do not overlap, the multi-output support vector regression learning method is used to remove this kind of thick clouds. If the thick cloud cover areas overlap, by using the multi-output learning of the surrounding areas to predict the surface features of the overlapped thick cloud cover areas, this kind of thick cloud is removed. Experimental results show that the proposed cloud removal method can effectively solve the problems of the cloud overlapping and radiation difference among multi-source images. The cloud removal image is clear and smooth.
Funding: financially supported by the National Natural Science Foundation of China (Grant No. 42072309), the Fundamental Research Funds for National University, China University of Geosciences (Wuhan) (Grant No. CUGDCJJ202217), the Knowledge Innovation Program of Wuhan-Basic Research (Grant No. 2022020801010199), and the Hubei Key Laboratory of Blasting Engineering Foundation (HKLBEF202002).
Funding: Supported by the Hebei Province Key Research and Development Project (Nos. 20313701D and 19210404D); the Open Project of the Beijing Key Laboratory of Mobile Computing and Universal Equipment; the National Social Science Fund of China (17AJL014); the Beijing University of Posts and Telecommunications Construction of World-Class Disciplines and Characteristic Development Guidance Special Fund "Cultural Inheritance and Innovation" Project (No. 505019221); the National Natural Science Foundation of China (Nos. U1536112, 81673697, and 61872046); the National Social Science Fund Key Project of China (No. 17AJL014); the "Blue Fire Project" (Huizhou) University of Technology Joint Innovation Project (CXZJHZ201729); the Industry-University Cooperation Collaborative Education Projects of the Ministry of Education (Nos. 201902218004, 201902024006, 201901197007, 201901199005, and 201901197001); the Shijiazhuang Science and Technology Plan Project (236240267A); and the Hebei Province Key Research and Development Plan Project (20312701D).
Abstract: The distribution of data has a significant impact on classification results. When the distribution of one class is insignificant compared with that of another, the data are imbalanced, which increases outlier values and noise and can greatly affect both the speed and the performance of classification. To address these problems, this paper starts from the motivation and mathematical formulation of classification and puts forward a new classification method based on the relationship between different classification formulations. Combining the vector characteristics of the actual problem with the choice of matrix characteristics, we first analyze ordered regression and introduce slack variables to handle the constraint problem posed by isolated points. We then introduce fuzzy factors, on the basis of the support vector machine, to address the gaps between isolated points, and introduce cost control to deal with sample skew. Finally, based on the bi-boundary support vector machine, a two-step weight-setting twin classifier is constructed. This helps to identify multiple tasks with feature-selected patterns without additional optimizers, addressing large-scale classification problems that cannot be handled effectively when the category distribution gap is very low.
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 62102311, 62202377, and 62272385); in part by the Natural Science Basic Research Program of Shaanxi (Nos. 2022JQ-600, 2022JM-353, and 2023-JC-QN-0327); in part by the Shaanxi Distinguished Youth Project (No. 2022JC-47); in part by the Scientific Research Program Funded by the Shaanxi Provincial Education Department (No. 22JK0560); in part by the Distinguished Youth Talents of Shaanxi Universities; and in part by the Youth Innovation Team of Shaanxi Universities.
Abstract: With widespread data collection and processing, privacy-preserving machine learning has become increasingly important in addressing privacy risks to individuals. The support vector machine (SVM) is one of the most elementary learning models in machine learning, and privacy issues surrounding SVM classifier training have attracted increasing attention. In this paper, we investigate Differential Privacy-compliant Federated Machine Learning with Dimensionality Reduction, called FedDPDR-DPML, which greatly improves data utility while providing strong privacy guarantees. In distributed learning scenarios, multiple participants usually hold unbalanced or small amounts of data. FedDPDR-DPML therefore enables multiple participants to collaboratively learn a global model based on weighted model averaging and knowledge aggregation; the server then distributes the global model to each participant to improve local data utility. For high-dimensional data, we adopt differential privacy in both the principal component analysis (PCA)-based dimensionality reduction phase and the SVM classifier training phase, which improves model accuracy while achieving strict differential privacy protection. In addition, we train differential privacy (DP)-compliant SVM classifiers by adding noise to the objective function itself, leading to better data utility. Extensive experiments on three high-dimensional datasets demonstrate that FedDPDR-DPML achieves high accuracy while ensuring strong privacy protection.
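The weighted model averaging step can be illustrated in isolation. This is a sketch under our own assumptions (parameter vectors as plain lists, weights proportional to each participant's sample count); it is not the paper's full aggregation protocol, which also involves knowledge aggregation and DP noise.

```python
def weighted_average(models, sizes):
    """Weighted model averaging: participant k contributes its parameter
    vector w_k with weight n_k / sum(n), so participants holding more
    data pull the global model toward their local solution."""
    total = sum(sizes)
    dim = len(models[0])
    return [sum(w[i] * n for w, n in zip(models, sizes)) / total
            for i in range(dim)]

# Participant 2 holds 3x the data, so the average is pulled toward it.
w_global = weighted_average([[1.0, 2.0], [3.0, 4.0]], sizes=[1, 3])
```

In the federated setting described in the abstract, the server would compute `w_global` from the participants' uploaded (noised) models and redistribute it.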
Funding: Supported by the National Natural Science Foundation of China (Grant No. 51975294) and the Fundamental Research Funds for the Central Universities of China (Grant No. 30922010706).
Abstract: This paper is concerned with effective fault diagnosis and fault-tolerant control for the aeronautical electromechanical actuator. Combining the advantages of model-driven and data-driven methods, a fault-tolerant nonsingular terminal sliding mode control method based on the support vector machine (SVM) is proposed. An SVM is designed to estimate the fault by off-line learning from small-sample data using a convex quadratic programming method, and is introduced into a high-gain observer to improve the state estimation and fault detection accuracy when a fault occurs. The state estimate of the observer is used for state reconfiguration. A novel nonsingular terminal sliding mode surface is designed, and Lyapunov theory is used to derive a parameter adaptation law and a control law. The proposed controller is guaranteed to achieve asymptotic stability, which is superior to many advanced fault-tolerant controllers. In addition, the parameter estimation can also help diagnose system faults, because faults are reflected in parameter variations. Extensive comparative simulation and experimental results illustrate the effectiveness and advancement of the proposed controller compared with several other mainstream controllers.
Funding: Financially supported by the Deanship of Scientific Research at King Khalid University under Research Grant Number R.G.P.2/549/44.
Abstract: Steganography algorithms are methods of hiding data transfers in media files. Several machine learning architectures have been presented recently to improve stego image identification performance using spatial information, and these methods have made it feasible to handle a wide range of problems associated with image analysis. Information embedding methods use images with little information or low payload, but the goal of contemporary research is to employ high-payload images for classification. To address the need for both low- and high-payload images, this work provides a machine learning approach to steganographic image classification that uses the Curvelet transform to efficiently extract characteristics from both types of images. A support vector machine (SVM), a commonplace classification technique, is employed to determine whether an image is a stego or cover image. The Wavelet Obtained Weights (WOW), Spatial Universal Wavelet Relative Distortion (S-UNIWARD), Highly Undetectable Steganography (HUGO), and Minimizing the Power of Optimal Detector (MiPOD) steganography techniques are used in a variety of experimental scenarios to evaluate the performance of the proposed method. Using WOW at several payloads, the proposed approach achieves a classification accuracy of 98.60%, exhibiting its superiority over state-of-the-art methods.
Funding: Supported by the Second Tibetan Plateau Scientific Expedition and Research Program (Grant No. 2019QZKK0904), the Natural Science Foundation of Hebei Province (Grant No. D2022403032), and the S&T Program of Hebei (Grant No. E2021403001).
Abstract: The selection of important factors in machine learning-based susceptibility assessments is crucial for obtaining reliable susceptibility results. In this study, metaheuristic optimization and feature selection techniques were applied to identify the most important input parameters for mapping debris flow susceptibility in the southern mountain area of Chengde City, Hebei Province, China, using machine learning algorithms. In total, 133 historical debris flow records and 16 related factors were selected. The support vector machine (SVM) was first used as the base classifier, and a hybrid model was then introduced through a two-step process. First, the particle swarm optimization (PSO) algorithm was employed to select the SVM model hyperparameters. Second, two feature selection algorithms, namely principal component analysis (PCA) and PSO, were integrated into the PSO-based SVM model, generating the PCA-PSO-SVM and FS-PSO-SVM models, respectively. Three statistical metrics (accuracy, recall, and specificity) and the area under the receiver operating characteristic curve (AUC) were employed to evaluate and validate model performance. The results indicated that the feature selection-based models exhibited the best performance, followed by the PSO-based SVM and SVM models. Moreover, the FS-PSO-SVM model performed better than the PCA-PSO-SVM model, showing the highest AUC, accuracy, recall, and specificity values in both the training and testing processes. Selecting optimal features is thus crucial to improving the reliability of debris flow susceptibility assessments. Moreover, the PSO algorithm proved to be not only an effective tool for hyperparameter optimization, but also a useful feature selection algorithm for improving the prediction accuracy of debris flow susceptibility with machine learning algorithms. The high and very high debris flow susceptibility zones cover approximately 38.01% of the study area, where debris flows may occur under intensive human activities and heavy rainfall events.
Funding: Supported by the Hunan Province Traditional Chinese Medicine Research Project (No. B2023043), the Hunan Provincial Department of Education Scientific Research Project (No. 22B0386), and the Hunan University of Traditional Chinese Medicine Campus-Level Research Fund Project (No. 2022XJZKC004).
Abstract: AIM: To develop a classifier for traditional Chinese medicine (TCM) syndrome differentiation of diabetic retinopathy (DR) using optimized machine learning algorithms, providing a basis for objective and intelligent TCM syndrome differentiation. METHODS: Collated data on real-world DR cases were collected. A variety of machine learning methods were used to construct TCM syndrome classification models, and the best performer was selected as the base model. A Genetic Algorithm (GA) was used for feature selection to obtain the optimal feature combination, and Harris Hawks Optimization (HHO) was used for parameter optimization; a classification model based on feature selection and parameter optimization was then constructed. The performance of the model was compared with that of other optimization algorithms, with accuracy, precision, recall, and F1 score as evaluation indicators. RESULTS: Data on 970 cases that met the screening requirements were collected. The Support Vector Machine (SVM) was the best base classification model, with an accuracy of 82.05%, a precision of 82.34%, a recall of 81.81%, and an F1 value of 81.76%. After GA screening, the optimal feature combination contained 37 feature values, consistent with TCM clinical practice. The model based on the optimal combination and SVM (GA_SVM) improved accuracy by 1.92% over the base classifier. The SVM model based on HHO and GA optimization (HHO_GA_SVM) had the best performance and convergence speed compared with the other optimization algorithms, improving accuracy by 3.51% over the base classification model. CONCLUSION: HHO and GA optimization can improve the performance of SVM models for TCM syndrome differentiation of DR, providing a new method and research direction for TCM intelligent assisted syndrome differentiation.
Funding: Supported by the Science and Technology Innovation Ability Cultivation Project of Hebei Provincial Planning for College and Middle School Students (22E50590D) and the Priority Research Project of Langfang Education Sciences Planning (JCJY202130).
Abstract: The turbidite channel of the South China Sea has attracted considerable attention. Influenced by complex faulting and rapid facies changes, predicting the channel with conventional seismic attributes is not sufficiently accurate. In response, this study combined grey relational analysis (GRA) with the support vector machine (SVM) and established a set of prediction procedures suitable for reservoirs with complex geological conditions. In a case study of the Huangliu Formation in the Qiongdongnan Basin, South China Sea, this study first nondimensionalized the conventional seismic attributes of Gas Layer Group I and then used the GRA method to obtain the main relational factors; a higher relational degree indicates a higher probability that an attribute responds to the turbidite channel. The optimized attributes with the highest relational factors were then accumulated to obtain a first-order accumulated sequence, which was used as the input training sample of the SVM model, successfully constructing the SVM turbidite channel model. Drilling results prove that the GRA-SVM method has a high drilling coincidence rate. Utilizing core and logging data and taking full advantage of seismic inversion in predicting the sand boundaries of water channels, this study divides the sedimentary microfacies of the Huangliu Formation in the Lingshui 17-2 Gas Field. This comprehensive study shows that the GRA-SVM method has high accuracy for predicting turbidite channels and can serve as a superior prediction method under complex geological conditions.
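The GRA step — ranking attributes by how closely each tracks a reference sequence — can be sketched with the standard grey relational coefficient (resolution coefficient ρ = 0.5). This is a generic textbook formulation, not code from the paper, and it assumes the sequences are already normalized to comparable scales.

```python
def grey_relational_degrees(reference, sequences, rho=0.5):
    """Grey relational analysis: score how closely each attribute
    sequence tracks the reference sequence (1.0 = identical).
    coefficient = (d_min + rho*d_max) / (d + rho*d_max), averaged
    over all sample points of a sequence."""
    deltas = [[abs(r - x) for r, x in zip(reference, seq)]
              for seq in sequences]
    d_min = min(min(d) for d in deltas)
    d_max = max(max(d) for d in deltas)
    if d_max == 0.0:  # every sequence is identical to the reference
        return [1.0 for _ in sequences]
    coeff = lambda d: (d_min + rho * d_max) / (d + rho * d_max)
    return [sum(coeff(d) for d in ds) / len(ds) for ds in deltas]

degrees = grey_relational_degrees(
    reference=[0.0, 0.5, 1.0],
    sequences=[[0.0, 0.5, 1.0],   # tracks the reference exactly
               [1.0, 0.5, 0.0]])  # anti-correlated with it
```

Attributes with the highest degrees would then be accumulated into the first-order sequence fed to the SVM, as described in the abstract.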
文摘This article delves into the analysis of performance and utilization of Support Vector Machines (SVMs) for the critical task of forest fire detection using image datasets. With the increasing threat of forest fires to ecosystems and human settlements, the need for rapid and accurate detection systems is of utmost importance. SVMs, renowned for their strong classification capabilities, exhibit proficiency in recognizing patterns associated with fire within images. By training on labeled data, SVMs acquire the ability to identify distinctive attributes associated with fire, such as flames, smoke, or alterations in the visual characteristics of the forest area. The document thoroughly examines the use of SVMs, covering crucial elements like data preprocessing, feature extraction, and model training. It rigorously evaluates parameters such as accuracy, efficiency, and practical applicability. The knowledge gained from this study aids in the development of efficient forest fire detection systems, enabling prompt responses and improving disaster management. Moreover, the correlation between SVM accuracy and the difficulties presented by high-dimensional datasets is carefully investigated, demonstrated through a revealing case study. The relationship between accuracy scores and the different resolutions used for resizing the training datasets has also been discussed in this article. These comprehensive studies result in a definitive overview of the difficulties faced and the potential sectors requiring further improvement and focus.
Funding: Supported by the National Defense Science and Technology Key Laboratory Fund of China (XM2020XT1023).
Abstract: As one of the most important parts of a weapon system of systems (WSoS), quantitative evaluation of a reconnaissance satellite system (RSS) is indispensable during its construction and application. Aiming at the problem of nonlinear effectiveness evaluation under small-sample conditions, we propose an evaluation method based on support vector regression (SVR) to address the defects of traditional methods. Considering that the performance of SVR is deeply influenced by the penalty factor, kernel type, and other parameters, an improved grey wolf optimizer (IGWO) is employed for parameter optimization. In the proposed IGWO algorithm, an opposition-based learning strategy is adopted to increase the probability of escaping local optima, a mutation operator is used to avoid premature convergence, and differential convergence factors are applied to increase the rate of convergence. Numerical experiments on 14 test functions validate the applicability of the IGWO algorithm to global optimization. The index system and evaluation method are constructed based on the characteristics of the RSS. To validate the proposed IGWO-SVR evaluation method, eight benchmark data sets and combat simulation are employed to estimate the evaluation accuracy, convergence performance, and computational complexity. According to the experimental results, the proposed method outperforms several prediction-based evaluation methods, verifying its superiority and effectiveness in RSS operational effectiveness evaluation.
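Of the three IGWO ingredients listed, the opposition-based learning strategy is the simplest to isolate: for every random candidate x in [lb, ub], its "opposite" lb + ub − x is also evaluated, and the better half of the combined pool is kept. A minimal sketch under our own naming, not the paper's implementation:

```python
import random

def obl_init(obj, lb, ub, n, seed=1):
    """Opposition-based learning initialization: evaluate each random
    point and its coordinate-wise opposite, then keep the best n of
    the 2n candidates (sorted ascending by objective value)."""
    rng = random.Random(seed)
    pts = [[rng.uniform(l, u) for l, u in zip(lb, ub)] for _ in range(n)]
    opp = [[l + u - x for x, l, u in zip(p, lb, ub)] for p in pts]
    return sorted(pts + opp, key=obj)[:n]

sphere = lambda p: sum(v * v for v in p)  # toy objective to minimize
pop = obl_init(sphere, lb=[-5.0, -5.0], ub=[5.0, 5.0], n=10)
```

Doubling the sampled pool this way raises the chance that the initial population already contains points near a good basin, which is the stated purpose of the strategy in the abstract.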
Abstract: Electromagnetic scattering computation has developed rapidly for many years, yet some computing problems for complex and coated targets cannot be solved with existing theory and computing models. A data-based computing model is established to make up for the insufficiency of theoretical models. Based on the support vector regression method, which is formulated on the principle of minimizing structural risk, a data model to predict the unknown radar cross section of appointed targets is given. Comparison between the actual data and the results of this predictive model shows that the support vector regression method is workable and achieves competitive precision.
Funding: Supported by the National Natural Science Foundation Project of China (Nos. 72088101 and 42177164) and the Distinguished Youth Science Foundation of Hunan Province of China (No. 2022JJ10073). The first author was funded by the China Scholarship Council (No. 202106370038).
Abstract: The hard rock pillar is one of the important structures in engineering design and excavation in underground mines, and accurate, convenient prediction of pillar stability is of great significance for underground space safety. This paper aims to develop hybrid support vector machine (SVM) models improved by three metaheuristic algorithms, the grey wolf optimizer (GWO), the whale optimization algorithm (WOA), and the sparrow search algorithm (SSA), for predicting hard rock pillar stability. An integrated dataset containing 306 hard rock pillars was established to generate the hybrid SVM models. Five parameters, including pillar height, pillar width, the ratio of pillar width to height, uniaxial compressive strength, and pillar stress, were set as input parameters. Two global indices, three local indices, and the receiver operating characteristic (ROC) curve with the area under the ROC curve (AUC) were utilized to evaluate all hybrid models' performance. The results confirmed that the SSA-SVM model is the best prediction model, with the highest values of all global and local indices. Nevertheless, the performance of the SSA-SVM model in predicting unstable pillars (AUC: 0.899) is not as good as for stable (AUC: 0.975) and failed pillars (AUC: 0.990). To verify the effectiveness of the proposed models, 5 field cases were investigated in a metal mine and 5 other cases were collected from published works. The validation results indicated that the SSA-SVM model achieved considerable accuracy, which means the combination of SVM and metaheuristic algorithms is a feasible approach to predicting pillar stability.
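The per-class AUC values quoted above can be computed from classifier scores with the rank (Mann-Whitney) formulation of the ROC area: the probability that a randomly chosen positive sample is scored above a randomly chosen negative one, counting ties as half. A generic sketch, not tied to the paper's data:

```python
def auc(scores, labels):
    """Area under the ROC curve via pairwise comparison of the scores
    of positive (label 1) and negative (label 0) samples; ties count
    as half a win. Equivalent to the Mann-Whitney U statistic scaled
    to [0, 1]."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For a multi-class problem such as stable/unstable/failed pillars, a one-vs-rest relabeling per class yields the three per-class AUCs reported in the abstract.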
Abstract: Lung cancer is a highly dangerous and deadly disease, indicated by the presence of pulmonary nodules in the lung and mostly caused by uncontrolled growth of lung cells. Lung nodule detection plays a significant role in detecting and screening for lung cancer in computed tomography (CT) scan images, and early detection is important to the survival rate and treatment of lung cancer patients. Pulmonary nodule classification techniques based on convolutional neural networks can be used for accurate and efficient detection of lung cancer. This work proposes an automatic nodule detection method for CT images based on a modified AlexNet architecture and the support vector machine (SVM) algorithm, named LungNet-SVM. The proposed model consists of seven convolutional layers, three pooling layers, and two fully connected layers used to extract features; the SVM classifier is applied for the binary classification of nodules into benign and malignant. The experimental analysis was performed on the publicly available benchmark Lung Nodule Analysis 2016 (LUNA16) dataset. The proposed model achieved 97.64% accuracy, 96.37% sensitivity, and 99.08% specificity. A comparative analysis between the proposed LungNet-SVM model and existing state-of-the-art approaches for lung cancer classification indicates that the proposed model achieves remarkable accuracy on the LUNA16 dataset.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 60573065), the Natural Science Foundation of Shandong Province, China (Grant No. Y2007G33), and the Key Subject Research Foundation of Shandong Province, China (Grant No. XTD0708).
Abstract: In this paper, we apply the nonlinear time series analysis method to small-time scale traffic measurement data. The prediction-based method is used to determine the embedding dimension of the traffic data. Based on the reconstructed phase space, the local support vector machine prediction method is used to predict the traffic measurement data, and the BIC-based neighbouring point selection method is used to choose the number of nearest neighbouring points for the local support vector machine regression model. The experimental results show that the local support vector machine prediction method, with optimized neighbouring points, can effectively predict small-time scale traffic measurement data and reproduce the statistical features of real traffic measurements.
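The phase-space reconstruction underlying the local prediction method is a time-delay embedding: each state vector collects `dim` samples spaced `tau` steps apart. A minimal sketch (the paper chooses the embedding dimension by a prediction-based method; here both parameters are simply given):

```python
def delay_embed(series, dim, tau):
    """Phase-space reconstruction by time-delay embedding: each point is
    (x[t], x[t+tau], ..., x[t+(dim-1)*tau]). Local prediction then fits
    a model on the nearest neighbours of a query point in this space."""
    n = len(series) - (dim - 1) * tau
    return [[series[t + i * tau] for i in range(dim)] for t in range(n)]
```

Each embedded vector can then serve as the input to a local SVM regression model whose neighbourhood size is chosen by the BIC-based method described in the abstract.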
Funding: Supported by the Basic Research Project of the Korea Institute of Geoscience and Mineral Resources (KIGAM) and the Project of Environmental Business Big Data Platform and Center Construction funded by the Ministry of Science and ICT.
Abstract: In this study, we developed multiple hybrid machine-learning models to address parameter optimization limitations and enhance the spatial prediction of landslide susceptibility models. We created a geographic information system database, and our analysis results were used to prepare a landslide inventory map containing 359 landslide events identified from Google Earth, aerial photographs, and other validated sources. A support vector regression (SVR) machine-learning model was used to divide the landslide inventory into training (70%) and testing (30%) datasets. The landslide susceptibility map was produced using 14 causative factors. We applied the established grey wolf optimization (GWO) algorithm, bat algorithm (BA), and cuckoo optimization algorithm (COA) to fine-tune the parameters of the SVR model to improve its predictive accuracy. The resultant hybrid models, SVR-GWO, SVR-BA, and SVR-COA, were validated in terms of the area under the curve (AUC) and root mean square error (RMSE). The AUC values for the SVR-GWO (0.733), SVR-BA (0.724), and SVR-COA (0.738) models indicate their good prediction rates for landslide susceptibility modeling. SVR-COA had the greatest accuracy, with an RMSE of 0.21687, and SVR-BA the least, with an RMSE of 0.23046. The three optimized hybrid models outperformed the SVR model (AUC = 0.704, RMSE = 0.26689), confirming the ability of metaheuristic algorithms to improve model performance.
Abstract: Choosing optimal parameters for support vector regression (SVR) is an important step in SVR design and strongly affects its performance. In this paper, based on an analysis of the influence of SVR parameters on generalization error, a new two-step approach is proposed for selecting SVR parameters. First, the kernel function and SVM parameters are optimized roughly through a genetic algorithm; then the kernel parameter is finely adjusted by local linear search. This approach has been successfully applied to a prediction model of the sulfur content in hot metal. The experimental results show that the proposed approach yields better generalization performance of SVR than other methods.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 60572007) and the National Basic Research Program of China (973 Program, Grant No. 613580202).
Abstract: Metamodeling techniques have been used in robust optimization to reduce the high computational cost of uncertainty analysis and improve the performance of robust optimization problems with computationally expensive simulation models. Existing metamodels mainly focus on polynomial regression (PR), neural networks (NN), and Kriging models; these metamodels are not well suited for large-scale robust optimization problems with small training sets and high nonlinearity. To address this problem, a reduced approximation model technique based on support vector regression (SVR) is introduced to improve metamodel accuracy, and a robust optimization method based on SVR is presented for high-dimensional, nonlinear problems. First, appropriate design parameter samples are selected through experimental design theory; then the response samples are obtained from simulations such as finite element analysis, and the SVR metamodel is constructed and treated as the mean and the variance of the objective performance functions. Combined with other constraints, the robust optimization model is formed, which can be solved by a genetic algorithm (GA). The applicability of the developed method is demonstrated in a case study of a two-bar structure system. The performance of SVR was compared with that of PR, Kriging, and back-propagation neural networks (BPNN); the comparison results show that the prediction accuracy of the SVR metamodel was higher than those of the other metamodels under uncertainty. The robust optimization solutions are close to the real result, and the proposed method is found to be accurate and efficient for robust optimization. This research provides an efficient method for robust optimization problems with complex structure.
Abstract: Prediction of primary quality variables in real time, with the capability to adapt to varying process conditions, is a critical task in the process industries. This article focuses on the development of nonlinear adaptive soft sensors for predicting the naphtha initial boiling point (IBP) and end boiling point (EBP) in a crude distillation unit. Adaptive inferential sensors with linear and nonlinear local models are reported based on a recursive just-in-time learning (JITL) approach. The local models designed are locally weighted regression (LWR), multiple linear regression (MLR), partial least squares regression (PLS), and support vector regression (SVR). In addition to model development, the effect of the relevant dataset size on model prediction accuracy and computation time is also investigated. Results show that the JITL model based on support vector regression with iterative single data algorithm optimization (ISDA) as the local model (JITL-SVR:ISDA) yielded the best prediction accuracy in reasonable computation time.
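The JITL idea — build a fresh local model from the training samples most similar to each query — can be sketched in one dimension. The distance-weighted average below is our own stand-in for the paper's LWR/MLR/PLS/SVR local models, and the function names are ours, not the article's.

```python
def jitl_predict(X, y, query, k=3):
    """Just-in-time learning sketch: select the k training samples
    nearest to the query, then predict with a local model fitted only
    on them. Here the local model is a distance-weighted average of
    their targets; in the paper it would be LWR, MLR, PLS, or SVR."""
    dists = [(abs(x - query), yi) for x, yi in zip(X, y)]
    nearest = sorted(dists)[:k]
    weights = [1.0 / (d + 1e-9) for d, _ in nearest]  # closer = heavier
    return sum(w * yi for w, (_, yi) in zip(weights, nearest)) / sum(weights)

X = [0.0, 1.0, 2.0, 10.0]   # toy process inputs
y = [0.0, 2.0, 4.0, 100.0]  # toy quality variable (e.g., boiling point)
pred = jitl_predict(X, y, query=1.0)
```

Because the relevant dataset is re-selected at every query, the sensor adapts to changing process conditions; the abstract's dataset-size study corresponds to varying `k` here.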
Funding: Supported by the National Natural Science Foundation of China (61074127).
Abstract: As the solutions of the least squares support vector regression machine (LS-SVRM) are not sparse, prediction is slow, which limits its applications. The defects of the existing adaptive pruning algorithm for the LS-SVRM are that training is slow and generalization performance is unsatisfactory, especially for large-scale problems; hence an improved algorithm is proposed. To accelerate training, the pruned data point and a fast leave-one-out error are employed to validate the temporary model obtained after decremental learning. A novel objective function in the termination condition, which involves the whole set of constraints generated by all training data points, and three pruning strategies are employed to improve generalization performance. The effectiveness of the proposed algorithm is tested on six benchmark datasets. The sparse LS-SVRM model has a faster training speed and better generalization performance.
Funding: Supported by the National Natural Science Foundation of China (61172127) and the Natural Science Foundation of Anhui Province (1408085MF121).
Abstract: Removal of cloud cover from satellite remote sensing images can effectively improve their availability. For thin cloud cover, the support vector value contourlet transform is used to achieve a multi-scale decomposition of the thin-cloud area of the image; by enhancing the high-frequency coefficients and suppressing the low-frequency coefficients, the thin cloud is removed. For thick cloud cover, if the thick-cloud areas on multi-source or multi-temporal remote sensing images do not overlap, the multi-output support vector regression learning method is used to remove them; if the thick-cloud areas overlap, multi-output learning on the surrounding areas is used to predict the surface features of the overlapped areas, removing this kind of thick cloud. Experimental results show that the proposed method can effectively solve the problems of cloud overlap and radiation differences among multi-source images, and the cloud-removed image is clear and smooth.