Abstract
This paper designs an internal model control system based on radial basis function (RBF) neural networks. Such a network approximates an objective function by adjusting the connection weights between the hidden layer and the output layer; if the hidden layer contains too few neurons, long convergence times, poor control quality, and even divergence can result. To avoid these problems, the paper proposes additionally adjusting the shape parameters and center vectors of the basis functions, and proves that, under adjustment of these different network parameters, the network converges to a minimum point of the objective function.
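The idea summarized above, training not only the hidden-to-output weights but also the centers and shape (width) parameters of the Gaussian basis functions, can be sketched as gradient descent on a squared-error objective. The sketch below is only an illustration under those assumptions, not the paper's implementation; the class name GaussianRBFNetwork, the learning rate, and the sine target in the usage example are invented for this illustration.

```python
import numpy as np

class GaussianRBFNetwork:
    """Gaussian RBF network whose weights, centers, and widths are all trainable."""

    def __init__(self, n_inputs, n_hidden, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = rng.uniform(-1.0, 1.0, size=(n_hidden, n_inputs))  # center vectors c_j
        self.widths = np.ones(n_hidden)                                   # shape parameters sigma_j
        self.weights = rng.normal(scale=0.1, size=n_hidden)               # hidden-to-output weights w_j
        self.lr = lr

    def _hidden(self, x):
        # Gaussian basis outputs phi_j(x) = exp(-||x - c_j||^2 / (2 * sigma_j^2))
        d2 = np.sum((x - self.centers) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.widths ** 2)), d2

    def predict(self, x):
        phi, _ = self._hidden(np.asarray(x, dtype=float))
        return float(self.weights @ phi)

    def train_step(self, x, target):
        # One gradient-descent step on the squared error 0.5 * e^2, updating the
        # output weights, the centers, and the shape parameters simultaneously.
        x = np.asarray(x, dtype=float)
        phi, d2 = self._hidden(x)
        e = float(self.weights @ phi) - target
        grad_w = e * phi
        grad_c = (e * self.weights * phi / self.widths ** 2)[:, None] * (x - self.centers)
        grad_s = e * self.weights * phi * d2 / self.widths ** 3
        self.weights -= self.lr * grad_w
        self.centers -= self.lr * grad_c
        self.widths -= self.lr * grad_s
        return 0.5 * e ** 2


# Illustrative use: approximate a simple nonlinear map, as one would when
# identifying a plant model for internal model control (the sine target is made up).
net = GaussianRBFNetwork(n_inputs=1, n_hidden=8)
rng = np.random.default_rng(1)
for _ in range(5000):
    x = rng.uniform(-1.0, 1.0, size=1)
    net.train_step(x, np.sin(3.0 * x[0]))
print(net.predict([0.5]), np.sin(1.5))
```

Because all three parameter groups are updated from the same output error, the extra adjustments let a small hidden layer reshape and relocate its basis functions instead of relying on a large, fixed set of neurons, which is the point the abstract makes about convergence speed and control quality.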
Source
《电路与系统学报》 (Journal of Circuits and Systems)
CSCD
1999, No. 2, pp. 86-91 (6 pages)
Keywords
Internal Model Control
Gaussian Function
Neural Networks
Radial Basis Function Neural Networks
Shape Parameter
Center Vector