CONVERGENCE OF ONLINE GRADIENT METHOD WITH A PENALTY TERM FOR FEEDFORWARD NEURAL NETWORKS WITH STOCHASTIC INPUTS (Cited by: 3)
Authors: 邵红梅, 吴微, 李峰. Numerical Mathematics: A Journal of Chinese Universities (English Series), SCIE, 2005, No. 1, pp. 87-96 (10 pages)
The online gradient algorithm has been widely used as a learning algorithm for feedforward neural network training. In this paper, we prove a weak convergence theorem for an online gradient algorithm with a penalty term, assuming that the training examples are presented in a stochastic order. Both the monotonicity of the error function during the iteration and the boundedness of the weights are guaranteed. We also present a numerical experiment to support our results.
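The abstract describes online (example-by-example) gradient training with a penalty term that keeps the weights bounded. A minimal sketch of that idea, not the paper's exact setting: a one-hidden-layer feedforward network trained on randomly ordered examples, where each update adds an L2 penalty gradient. The network size, step size `eta`, penalty coefficient `lam`, and toy data are all illustrative assumptions.

```python
import numpy as np

# Sketch of online gradient descent with an L2 penalty term for a
# one-hidden-layer feedforward network. All hyperparameters are
# illustrative, not taken from the paper.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy regression data: stochastic inputs, target = sum of the inputs.
X = rng.uniform(-1, 1, size=(200, 3))
y = X.sum(axis=1)

n_hidden = 5
W = rng.normal(scale=0.5, size=(n_hidden, 3))   # input-to-hidden weights
v = rng.normal(scale=0.5, size=n_hidden)        # hidden-to-output weights
eta, lam = 0.05, 1e-3                           # step size, penalty coefficient

def penalized_loss(W, v):
    h = sigmoid(X @ W.T)
    out = h @ v
    return 0.5 * np.mean((out - y) ** 2) + 0.5 * lam * (np.sum(W**2) + np.sum(v**2))

start = penalized_loss(W, v)
for epoch in range(50):
    # Online updates: examples are presented in a random order each epoch.
    for i in rng.permutation(len(X)):
        h = sigmoid(W @ X[i])
        e = h @ v - y[i]                         # output error on this example
        grad_v = e * h + lam * v                 # penalty contributes lam * weight
        grad_W = np.outer(e * v * h * (1 - h), X[i]) + lam * W
        v -= eta * grad_v
        W -= eta * grad_W
```

The `lam * weight` terms are the gradient of the quadratic penalty; they shrink the weights at every step, which is the mechanism behind the boundedness result the paper proves.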
Keywords: feedforward neural networks, convergence, stochastic variables, monotonicity, boundedness, online gradient method