Abstract
As an efficient classification model, the Support Vector Machine (SVM) depends heavily on the choice of its hyperparameters. In this paper, the hyperparameter selection problem of the SVM is reformulated as a bilevel programming problem, which is solved by combining the forward-mode method with gradient descent, yielding an optimized SVM model. To address the challenges posed by high-dimensional data, Principal Component Analysis (PCA) is applied to reduce the dimensionality of the original data, thereby improving the performance of the SVM on high-dimensional, small-sample datasets. Compared with three currently popular methods, namely grid search, Bayesian optimization, and simulated annealing, the SVM model obtained with the proposed bilevel programming approach achieves an accuracy of 98.2%, a recall of 100%, and a training time of 0.768 s, outperforming the other three methods. This indicates that the model obtained with the proposed approach has better predictive performance.
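To make the idea above more concrete, the following is a minimal, hypothetical Python sketch of the same pipeline: PCA is applied to the data, and the SVM hyperparameters (C and the RBF kernel width gamma) are tuned by gradient descent on a smooth validation loss, which plays the role of the upper-level objective in the bilevel formulation. The hypergradient here is approximated by central finite differences rather than the forward-mode differentiation used in the paper, and the dataset, variable names, and parameter values are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch only: gradient-based tuning of SVM hyperparameters on a
# validation loss, with PCA preprocessing. The hypergradient is approximated by
# central finite differences instead of the forward-mode method in the paper,
# and the dataset and all settings below are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fit_svm(log_c, log_gamma, X_tr, y_tr):
    # Lower level: train the SVM for fixed hyperparameters.
    return SVC(C=np.exp(log_c), gamma=np.exp(log_gamma), kernel="rbf").fit(X_tr, y_tr)

def val_loss(log_c, log_gamma, X_tr, y_tr, X_val, y_val):
    # Upper level: smooth logistic surrogate of the validation error.
    model = fit_svm(log_c, log_gamma, X_tr, y_tr)
    margins = model.decision_function(X_val) * (2 * y_val - 1)  # signed margins
    return np.mean(np.logaddexp(0.0, -margins))

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Standardize and reduce dimensionality with PCA, as in the abstract.
scaler = StandardScaler().fit(X_tr)
pca = PCA(n_components=10).fit(scaler.transform(X_tr))
X_tr_p = pca.transform(scaler.transform(X_tr))
X_val_p = pca.transform(scaler.transform(X_val))

# Gradient descent on theta = [log C, log gamma] using finite-difference hypergradients.
theta, lr, eps = np.array([0.0, -2.0]), 0.5, 1e-2
for _ in range(25):
    grad = np.zeros(2)
    for i in range(2):
        d = np.zeros(2); d[i] = eps
        grad[i] = (val_loss(*(theta + d), X_tr_p, y_tr, X_val_p, y_val)
                   - val_loss(*(theta - d), X_tr_p, y_tr, X_val_p, y_val)) / (2 * eps)
    theta -= lr * grad

tuned = fit_svm(*theta, X_tr_p, y_tr)
print("C = %.3f, gamma = %.4f, val accuracy = %.3f"
      % (np.exp(theta[0]), np.exp(theta[1]), tuned.score(X_val_p, y_val)))
```

In the paper itself the hypergradient is obtained with the forward-mode method, which propagates derivatives of the lower-level SVM solution with respect to the hyperparameters; the finite-difference step above only mimics that behaviour for illustration.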
Source
Advances in Applied Mathematics (《应用数学进展》)
2024, No. 10, pp. 4601-4609 (9 pages)