
The Theory of SVM and Programming Based Learning Algorithms in Neural Networks
(原题: 支持向量机理论与基于规划的神经网络学习算法)
Cited by: 37
Abstract: In recent years, Support Vector Machine (SVM) theory has received wide attention from researchers abroad, who generally regard it as a new research direction for neural network learning; it has also begun to attract attention in China. This paper studies the relationship between SVM theory and programming-based learning algorithms for neural networks. First, we show that Vapnik's SVM-based algorithms and the programming-based neural network algorithms we presented in 1994 are equivalent: when the sample set is linearly separable, both obtain the maximal margin solution. The two differ in complexity: the former (usually solved by the Lagrange multiplier method) has computational complexity that grows exponentially with the training sample size, whereas the latter has only polynomial complexity. Second, we reduce the programming algorithm to finding the projection of a point onto a convex set; using this geometric intuition, a constructive iterative algorithm, the "simplex iterative algorithm", is presented. The new algorithm has a strong geometric intuition, which deepens the understanding of neural network learning (in the linearly separable case) and yields a necessary and sufficient condition for a sample set to be linearly separable. In addition, for the knowledge-expansion problem, the new algorithm gives a very convenient incremental learning algorithm. Finally, we point out that the following principle is an effective way to study neural network learning problems: "regard the conditions the network must satisfy as constraints and some performance measure of the network as the objective function, so that the learning problem of the network becomes a programming problem."
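The closing principle — required conditions become constraints, a performance measure becomes the objective — can be illustrated with a minimal sketch. This is not the paper's simplex iterative algorithm: it hands the standard hard-margin formulation, minimize ½‖w‖² subject to yᵢ(w·xᵢ + b) ≥ 1, to a generic constrained optimizer. The toy data set and the choice of SciPy's SLSQP solver are illustrative assumptions.

```python
# Sketch of "learning as a programming problem" (hard-margin case):
#   minimize  0.5 * ||w||^2
#   subject to  y_i * (w . x_i + b) >= 1  for every sample i
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable data (assumption, not from the paper).
X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def objective(p):
    # p = (w1, w2, b); only w enters the objective.
    return 0.5 * np.dot(p[:2], p[:2])

# One inequality constraint per sample: y_i*(w.x_i + b) - 1 >= 0.
constraints = [
    {"type": "ineq",
     "fun": lambda p, i=i: y[i] * (np.dot(p[:2], X[i]) + p[2]) - 1.0}
    for i in range(len(y))
]

res = minimize(objective, x0=np.zeros(3), method="SLSQP",
               constraints=constraints)
w, b = res.x[:2], res.x[2]
print("w =", w, "b =", b)
# Every sample should now satisfy its margin constraint.
print(np.all(y * (X @ w + b) >= 1 - 1e-6))
```

For this data the maximal-margin separator is determined by the support vectors (2, 2) and (-1, -1), giving w = (1/3, 1/3), b = -1/3; any solver for this small quadratic program recovers it.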
Author: Zhang Ling (张铃)
Source: Chinese Journal of Computers (《计算机学报》), EI / CSCD / Peking University Core, 2001, No. 2, pp. 113-118 (6 pages)
Funding: Supported by the National "973" Program (Grant G1998030509)
Keywords: support vector machine, programming, neural network, learning algorithm, simplex iterative algorithm

References (6)

  1. Zhang Ling, Zhang Bo, Wu Fuchao. A programming-based learning algorithm for neural networks (神经网络的规划学习算法). Chinese Journal of Computers, 1994, 17(9): 669-675. (Cited by: 13)
  2. Drucker H. IEEE Trans Neural Networks, 1999, 10(5): 1048.
  3. Zhang Ling. IEEE Trans Neural Networks, 1999, 10(4): 925.
  4. Amari S. Neural Networks, 1999, 12: 783.
  5. Zhang L. IEEE Trans Neural Networks, 1995, (6): 3.
  6. Pardalos P M. Linear Algebra Appl, 1991, 152(1): 69.

