
Smoothed l_1-norm learning algorithm for neural networks
Abstract: Different constructions of the neural network error function lead to different rates of learning convergence. Fast learning algorithms for feed-forward neural networks can be approached by focusing on the optimization criteria used for their training, and it was recently found that the convergence rate of the error back-propagation algorithm can be accelerated with the least-absolute-value (l_1-norm) criterion. Based on the recursive identification technique, this paper proposes a new smoothed absolute-error objective function, which leads to a fast training algorithm that updates the weights of each neuron in the network in an effective parallel way. The smoothing idea can also be applied to other optimization problems under absolute-error criteria. The algorithm's fast convergence is illustrated with examples in which the network learns the Feigenbaum map and the XOR logical operation.
Author: 孙明轩 (Sun Mingxuan)
Source: Journal of Xi'an Institute of Technology (《西安工业学院学报》), 1995, No. 4, pp. 253-259 (7 pages)
Funding: National Natural Science Foundation of China, Grant 69404004
Keywords: absolute error criterion; back propagation algorithm; neural network; learning algorithm; recursive identification
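
The abstract gives no formulas, so the following is only a minimal sketch of the smoothing idea, not the paper's algorithm: it assumes the common smoothing φ_μ(e) = √(e² + μ²) of the absolute error |e|, and uses plain gradient back-propagation rather than the paper's recursive-identification-based parallel update, which is not reproduced on this page. It trains a small 2-3-1 sigmoid network on the XOR example mentioned in the abstract; the network size, learning rate, smoothing parameter μ, and all code names are illustrative.

```python
import numpy as np

def phi(e, mu):
    """Smooth surrogate for |e|: differentiable everywhere, -> |e| as mu -> 0."""
    return np.sqrt(e ** 2 + mu ** 2)

def dphi(e, mu):
    """Derivative of the surrogate; replaces the non-smooth sign(e) of |e|."""
    return e / np.sqrt(e ** 2 + mu ** 2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR learning example from the abstract.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([[0.0], [1.0], [1.0], [0.0]])

rng = np.random.default_rng(0)
W1 = rng.normal(scale=1.0, size=(2, 3)); b1 = np.zeros(3)   # 2-3-1 network
W2 = rng.normal(scale=1.0, size=(3, 1)); b2 = np.zeros(1)
lr, mu = 0.5, 0.05

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                 # hidden layer
    y = sigmoid(h @ W2 + b2)                 # network output
    e = y - d
    # Back-propagate the smoothed-l1 gradient dphi(e) in place of the
    # usual squared-error gradient 2*e.
    g2 = dphi(e, mu) * y * (1.0 - y)
    g1 = (g2 @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ g2; b2 -= lr * g2.sum(axis=0)
    W1 -= lr * X.T @ g1; b1 -= lr * g1.sum(axis=0)

print(np.round(y.ravel(), 3))                # should approach [0, 1, 1, 0]
```

The point of the surrogate is that as μ → 0 it approaches |e| while keeping a bounded, everywhere-defined derivative, which is what makes gradient-based weight updates under the l_1-norm criterion workable.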