Sparse Structured Least Squares Twin Support Vector Regression Machine (cited by: 2)
Abstract: The least squares twin support vector regression machine (LSTSVR) introduces a least squares loss that reduces the pair of quadratic programming problems in the twin support vector regression machine (TSVR) to two systems of linear equations, greatly shortening training time. However, because LSTSVR minimizes an empirical risk built on the least squares loss, it suffers from two shortcomings: (1) it is prone to overfitting; (2) its solution lacks sparsity, which makes training on large-scale data difficult. To address (1), a structured least squares twin support vector regression machine (S-LSTSVR) is proposed to improve the model's generalization ability. To address (2), an incomplete Cholesky decomposition is used to form a low-rank approximation of the kernel matrix, yielding a sparse algorithm, SS-LSTSVR, for solving S-LSTSVR that can train on large-scale data effectively. Experiments on artificial data and UCI data sets show that SS-LSTSVR not only avoids overfitting but also solves large-scale training problems efficiently.
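The two computational ideas named in the abstract can be sketched in code: a pivoted incomplete Cholesky factorization that replaces the n x n kernel matrix K with a low-rank factor G (K ≈ GGᵀ), and a Woodbury-identity solve that uses G to handle the kind of regularized linear system that least-squares twin models reduce to, at O(nr²) rather than O(n³) cost. The sketch below is a minimal illustration, not the authors' SS-LSTSVR implementation; the function names, the RBF kernel, and the parameters tol and lam are all hypothetical choices.

    import numpy as np

    def incomplete_cholesky(K, tol=1e-6, max_rank=None):
        # Pivoted incomplete Cholesky: returns G such that K ~= G @ G.T.
        # Stops once the largest residual diagonal entry falls below tol,
        # so the number of columns of G adapts to the numerical rank of K.
        n = K.shape[0]
        max_rank = n if max_rank is None else max_rank
        d = np.diag(K).astype(float)            # residual diagonal of K - G @ G.T
        G = np.zeros((n, max_rank))
        for j in range(max_rank):
            i = int(np.argmax(d))               # pivot: largest residual entry
            if d[i] <= tol:
                return G[:, :j]                 # early exit: low rank reached
            G[:, j] = (K[:, i] - G[:, :j] @ G[i, :j]) / np.sqrt(d[i])
            d -= G[:, j] ** 2
            d[i] = 0.0                          # pivot row is now reproduced exactly
        return G

    def solve_lowrank_ridge(G, y, lam):
        # Solve (G @ G.T + lam * I) x = y via the Woodbury identity:
        # x = (y - G (lam * I_r + G.T G)^(-1) G.T y) / lam, an O(n r^2) cost.
        r = G.shape[1]
        inner = lam * np.eye(r) + G.T @ G       # small r x r system
        return (y - G @ np.linalg.solve(inner, G.T @ y)) / lam

    # Demo: RBF kernel whose wide bandwidth makes it numerically low rank.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((300, 2))
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / 20.0)
    G = incomplete_cholesky(K, tol=1e-8)
    x = solve_lowrank_ridge(G, np.ones(300), lam=0.1)
    print(G.shape)                              # r is well below n = 300 here
    print(np.linalg.norm((K + 0.1 * np.eye(300)) @ x - 1.0))  # small, since K ~= G @ G.T

Since both of S-LSTSVR's linear systems involve the same kernel matrix, a single factor G of rank r could serve both solves, which is what lets a model of this type scale to large training sets.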
Authors: YAN Liping, MA Jiajun, CHEN Wenxing (School of Mathematics and Statistics, Xidian University, Xi'an 710126, China; School of Mathematics and Statistics, Ningxia University, Yinchuan 750021, China)
Source: Computer Engineering and Applications, 2019, No. 3, pp. 10-14, 45 (6 pages); indexed in CSCD and the Peking University Core Journals list.
Funding: National Natural Science Foundation of China (No. 61772020)
Keywords: least squares twin support vector regression (LSTSVR); structural risk minimization; sparsity; incomplete Cholesky decomposition; large-scale
