Abstract
Based on an analysis of the iterative process of the training algorithm for the associative memory system built on Newton's forward interpolation formula (NFI-AMS), a necessary and sufficient condition for the convergence of the algorithm is given. The convergence is shown to be independent of the function being approximated; it depends only on the degree of the approximating polynomials, the partition of the approximation region, and the distribution of the sample points. Consequently, the convergence of the algorithm can be judged, without knowing the sample values, from the spectral radius of a given matrix G. Numerical simulations show that, for a fixed partition and sampling scheme, the spectral radius of G grows as the polynomial degree increases and eventually exceeds one. Hence the NFI-AMS algorithm is suitable only for piecewise polynomial approximation of low degree.
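The convergence criterion stated in the abstract is the standard one for a linear fixed-point iteration: the scheme converges if and only if the spectral radius of the iteration matrix G is less than one. The sketch below is a minimal, hypothetical illustration of that test and is not taken from the paper; the matrix G and vector c used here are stand-ins, since the paper's actual G is determined by the Newton forward interpolation scheme, the partition of the approximation region, and the placement of the samples.

import numpy as np

def spectral_radius(G: np.ndarray) -> float:
    """Largest absolute eigenvalue of the iteration matrix G."""
    return max(abs(np.linalg.eigvals(G)))

def iterate(G: np.ndarray, c: np.ndarray, w0: np.ndarray, steps: int = 200) -> np.ndarray:
    """Run the linear fixed-point iteration w_{k+1} = G w_k + c."""
    w = w0.copy()
    for _ in range(steps):
        w = G @ w + c
    return w

# Hypothetical stand-in for the NFI-AMS iteration matrix; the real G
# depends on the polynomial degree, the partition, and the sample points.
G = np.array([[0.5, 0.2],
              [0.1, 0.4]])
c = np.array([1.0, 1.0])

rho = spectral_radius(G)
print(f"spectral radius = {rho:.3f} -> {'converges' if rho < 1 else 'diverges'}")
if rho < 1:
    w_star = iterate(G, c, np.zeros(2))
    # The fixed point satisfies w* = G w* + c, i.e. (I - G) w* = c.
    print("iterated:", w_star, "direct solve:", np.linalg.solve(np.eye(2) - G, c))

In this toy example the spectral radius is 0.6, so the iteration converges regardless of the starting point; raising it above one would make the same iteration diverge, mirroring the abstract's observation about increasing the polynomial degree.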
Source
Journal of Data Acquisition and Processing (《数据采集与处理》)
Indexed in: EI, CSCD
1999, No. 1, pp. 13-17 (5 pages)
Funding
National Natural Science Foundation of China
Keywords
associative memory system
learning algorithm
convergence
artificial neural network
algorithms
Newton's interpolation
training