Abstract
This paper studies learning convergence within a unified framework called the General Memory Neural Network (GMNN). The structure of this model class consists of three parts: input space quantization, a memory address generator, and a combination output produced by table-lookup operations. When the number of generated addresses is fixed and finite and the network output is a linear summation, the GMNN can be proved to converge in the least-square-error sense. CMAC (Cerebellar Model Articulation Controller) and SLLUP (Single-Layer Lookup Perceptrons) are representative members of this model class. The significance of this paper is that it provides theoretical guidance for constructing new neural network models based on local learning. Finally, two examples of such constructions are given: generalized models of SDM (Sparse Distributed Memory) and of SLLUP.
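To make the three-part architecture concrete, below is a minimal Python sketch of a GMNN-style network in the spirit of CMAC: the input is quantized by several overlapping tilings, the address generator produces a fixed number of table addresses, the output is a linear summation of the addressed entries, and training uses the LMS (delta) rule, which is what underlies the least-square-error convergence claimed above. The class name, the offset-tiling scheme, and all parameter values are illustrative assumptions, not taken from the paper.

import numpy as np

class SimpleGMNN:
    """Illustrative GMNN: quantization -> fixed set of addresses -> linear sum."""

    def __init__(self, n_tilings=8, n_bins=16, lr=0.1, seed=0):
        self.n_tilings = n_tilings            # fixed number of generated addresses
        self.n_bins = n_bins                  # quantization resolution per tiling
        self.lr = lr                          # LMS learning rate
        rng = np.random.default_rng(seed)
        self.offsets = rng.random(n_tilings)  # random tiling offsets in [0, 1)
        # one lookup table (the "memory"); each tiling owns n_bins + 1 cells
        self.weights = np.zeros(n_tilings * (n_bins + 1))

    def addresses(self, x):
        # Input space quantization: each shifted tiling maps x in [0, 1) to a cell;
        # the address generator turns (tiling, cell) pairs into table indices.
        cells = ((x + self.offsets / self.n_bins) * self.n_bins).astype(int)
        return np.arange(self.n_tilings) * (self.n_bins + 1) + cells

    def predict(self, x):
        # Combination output: plain linear summation of the addressed entries.
        return self.weights[self.addresses(x)].sum()

    def train_step(self, x, target):
        # LMS / delta rule: only the addressed cells change (local learning).
        addrs = self.addresses(x)
        error = target - self.weights[addrs].sum()
        self.weights[addrs] += self.lr * error / self.n_tilings
        return error

# Usage: approximate sin(2*pi*x) on [0, 1)
net = SimpleGMNN()
rng = np.random.default_rng(1)
for x in rng.random(5000):
    net.train_step(x, np.sin(2 * np.pi * x))
print(round(net.predict(0.25), 2))  # close to sin(pi/2) = 1.0

Because each training step touches only the fixed set of addressed cells, updates are local and cheap; this is the property the paper's generalized SDM and SLLUP constructions are meant to preserve.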
Source
Computer Science (《计算机科学》)
CSCD
Peking University Core Journal (北大核心)
2004, No. 1, pp. 118-121 and 132 (5 pages)
Funding
National Natural Science Foundation of China (Grant No. 69973021)