Abstract
This paper presents a new training algorithm for feedforward neural networks, based on an optimization objective constructed from Young's inequality for conjugate convex functions. The objective is convex with respect to both the inter-layer weights and the hidden-layer outputs, so it has no local minima. The algorithm first treats the hidden-layer outputs as variables and updates them by convex optimization, then rapidly computes the weights on either side of the hidden layer. Numerical experiments show that the algorithm is simple, converges quickly, generalizes well, and greatly reduces the learning error.
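The two-step scheme in the abstract (optimize the hidden-layer outputs, then solve for the weights) relies on the fact that, once the hidden outputs are treated as free variables, each weight update reduces to a convex least-squares subproblem; the convexity argument rests on the Fenchel-Young inequality f(x) + f*(y) ≥ ⟨x, y⟩ for a convex function f and its conjugate f*. The following is a minimal sketch of such a layer-by-layer alternating scheme for a single-hidden-layer network with tanh activation; the function names, the corrective step on the hidden outputs, and the toy data are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(x):
    # Hidden-layer activation.
    return np.tanh(x)

def sigma_inv(h):
    # Its inverse, used to map hidden-output targets back to pre-activations.
    return np.arctanh(np.clip(h, -0.999, 0.999))

def fit_layer_by_layer(X, Y, n_hidden=8, n_iter=20):
    n, d = X.shape
    W1 = rng.normal(scale=0.5, size=(d, n_hidden))   # input-to-hidden weights
    H = sigma(X @ W1)                                # hidden outputs, treated as variables
    for _ in range(n_iter):
        # With H fixed, the hidden-to-output weights solve an ordinary
        # least-squares problem -- a convex subproblem with no local minima.
        W2, *_ = np.linalg.lstsq(H, Y, rcond=None)
        # Nudge the hidden outputs toward values that reduce the output error
        # (an illustrative gradient-like correction on H), then refit the
        # input weights by least squares on the implied pre-activations.
        H_target = np.clip(H + 0.1 * (Y - H @ W2) @ W2.T, -0.999, 0.999)
        W1, *_ = np.linalg.lstsq(X, sigma_inv(H_target), rcond=None)
        H = sigma(X @ W1)
    W2, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W1, W2

# Toy regression problem: approximate y = sin(x) on [-2, 2].
X = np.linspace(-2, 2, 40).reshape(-1, 1)
Y = np.sin(X)
W1, W2 = fit_layer_by_layer(X, Y)
pred = sigma(X @ W1) @ W2
mse = float(np.mean((pred - Y) ** 2))
print(mse)
```

Each pass solves two linear least-squares problems instead of running gradient descent through the nonlinearity, which is the source of the fast convergence the abstract claims for this family of methods.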
Source
Journal of Communication and Computer (Chinese-English Edition), 2005, No. 4, pp. 39-41, 49 (4 pages)
Funding
Supported by the Dean's Fund of the Graduate School of the Chinese Academy of Sciences (Grant No. YZIJ200206) and the Open Fund of the Intelligent Information Processing Laboratory, Institute of Computing Technology, Chinese Academy of Sciences (Grant No. IIP200304).
Keywords
Feedforward Neural Networks
Convex Optimization Theory
Layer-by-Layer Optimization Algorithm