Abstract
This paper proposes a learning strategy for symmetrically interconnected neural networks in which the connection weights are determined by a global constrained-optimization method. The optimization is carried out by gradient descent. The learning algorithm guarantees that each training pattern becomes a stable attractor of the system with, in the optimization sense, the largest possible basin of attraction. The paper discusses the network's storage capacity, the asymptotic stability of the training patterns, and the size of the basins of attraction. Computer simulation results demonstrate the advantages of the learning algorithm.
In this paper, a learning algorithm for symmetric associative memories is examined. Considering two design criteria, we cast the learning procedure as a constrained minimization solved by a gradient descent method. The learning approach guarantees storage of each desired pattern with a basin of attraction that is as large as possible. We also study the storage capacity and the asymptotic stability. Several computer simulations show the advantages of the learning algorithm.
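The procedure described in the abstract can be illustrated with a minimal sketch. The code below is not the paper's algorithm; it is an assumed reconstruction of the general technique: learn a symmetric, zero-diagonal weight matrix by gradient descent on hinge-style stability constraints, so that each stored bipolar pattern x satisfies x_i (Wx)_i >= margin and is therefore a fixed point of the sign-update dynamics. The margin is used here as a crude proxy for basin size; all function names and parameters are illustrative.

```python
import numpy as np

def train_symmetric_memory(patterns, margin=1.0, lr=0.05, epochs=500):
    """Projected gradient descent on stability constraints (illustrative
    sketch, not the paper's exact formulation). Minimizes the hinge loss
    sum over (pattern, unit) of max(0, margin - x_i * (W x)_i), then
    projects W back onto the symmetric, zero-diagonal subspace."""
    X = np.asarray(patterns, dtype=float)   # shape (m, n), entries +/-1
    n = X.shape[1]
    W = np.zeros((n, n))
    for _ in range(epochs):
        H = X @ W                            # local fields, shape (m, n)
        viol = (X * H) < margin              # unmet stability constraints
        G = (X * viol).T @ X                 # negative hinge-loss gradient
        W += lr * G                          # gradient descent step
        W = 0.5 * (W + W.T)                  # project: symmetric weights
        np.fill_diagonal(W, 0.0)             # project: no self-connections
    return W

def recall(W, probe, steps=20):
    """Synchronous sign-update recall dynamics: s <- sign(W s)."""
    s = np.sign(probe).astype(float)
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1.0              # break ties toward +1
        if np.array_equal(s_new, s):
            break                            # reached a fixed point
        s = s_new
    return s
```

Under this formulation, once every constraint holds with the chosen margin the gradient vanishes and each stored pattern is exactly a fixed point of `recall`; probes within the basin of attraction converge back to the nearest stored pattern.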
Source
《通信学报》
EI
CSCD
Peking University Core Journals (北大核心)
1992, No. 5, pp. 88-92 (5 pages)
Journal on Communications
Keywords
Associative memory model
Constrained optimization
Learning algorithm
Associative memory, Constrained optimal learning algorithm, Gradient descent rule, Asymptotic stability, Basin of attraction.