Abstract
Sequential minimal optimization (SMO) is a highly effective algorithm for training support vector machines (SVM), but its training speed remains slow on large-scale datasets. To accelerate training while essentially preserving accuracy, an improved optimization strategy is proposed: skip the subsets of vectors that are irrelevant to accuracy, terminate loops early, and relax the KKT conditions so that the working set can be shrunk. Experiments on several well-known datasets show that this strategy substantially reduces SMO training time with no significant loss of accuracy.
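To illustrate the working-set shrinking idea described above, the following is a minimal sketch of a relaxed KKT check, assuming the standard SMO violation test with labels in {-1, +1} and a box constraint C. The function name shrink_working_set and the tolerance parameter tau are illustrative choices, not identifiers from the paper; the paper's exact skipping and early-termination rules may differ.

```python
import numpy as np

def shrink_working_set(alpha, y, f_x, C, tau=1e-2):
    """Return indices that still violate the relaxed KKT conditions.

    Samples whose KKT violation falls within the tolerance `tau` are
    dropped from the working set, so later SMO sweeps only visit the
    remaining candidates.

    alpha : current Lagrange multipliers, shape (n,)
    y     : labels in {-1, +1}, shape (n,)
    f_x   : current decision values f(x_i), shape (n,)
    C     : box constraint
    tau   : relaxed KKT tolerance (larger tau -> smaller working set)
    """
    # r_i = y_i * E_i with E_i = f(x_i) - y_i; for y_i in {-1, +1}
    # this equals y_i * f(x_i) - 1.
    r = y * (f_x - y)
    violates_lower = (r < -tau) & (alpha < C)   # alpha_i should increase
    violates_upper = (r > tau) & (alpha > 0)    # alpha_i should decrease
    return np.where(violates_lower | violates_upper)[0]
```

In this sketch, a larger tau treats more samples as already satisfying the KKT conditions, shrinking the working set more aggressively; this is the trade-off the abstract refers to between training speed and a small potential loss of accuracy.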
Source
Modern Electronics Technique (《现代电子技术》), 2013, No. 8, pp. 17-19 (3 pages)