Abstract
The classic NARX recurrent neural network requires three structural parameters to be determined before use: the exogenous input delay order, the output feedback delay order, and the number of hidden-layer neurons, which greatly complicates network design. To overcome this drawback, a new dynamic neuron model, inspired by the function of biological neurons, is proposed and incorporated into the classic NARX recurrent neural network, yielding an improved NARX recurrent neural network called the DAFNN (Dynamic Activation Function Neural Network). The hidden-layer neurons of the DAFNN are the new dynamic neurons, which have dynamic activation functions, so only one structural parameter, the number of hidden-layer neurons, remains to be determined; this simplifies the design of the network architecture. It is further shown theoretically that at each time t a DAFNN is equivalent to a classic NARX recurrent neural network with t delays of both exogenous input and output feedback, and the stability of the network is proved using the Lyapunov stability theorem. Simulation results show that the DAFNN not only outperforms the classic NARX recurrent neural network in nonlinear system identification, but also achieves significantly better generalization.
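To make the three structural parameters of the classic architecture concrete, the following is a minimal illustrative sketch of a NARX-style recurrent predictor with tapped delay lines. It is not the paper's DAFNN; the single tanh hidden layer, the fixed random weights, and all variable names (`du`, `dy`, `nh`, `narx_step`) are assumptions made purely for illustration.

```python
import numpy as np

# Sketch of a classic NARX recurrent network (illustrative only; weights are
# random and untrained). The three structural parameters the abstract names:
#   du -- exogenous input delay order
#   dy -- output feedback delay order
#   nh -- number of hidden-layer neurons
def narx_step(x_hist, y_hist, W_in, b_in, W_out, b_out):
    """One step: y(t) = W_out . tanh(W_in . [x(t-1..t-du); y(t-1..t-dy)] + b_in) + b_out."""
    z = np.concatenate([x_hist, y_hist])   # regressor built from both delay lines
    h = np.tanh(W_in @ z + b_in)           # static hidden-layer activation
    return float(W_out @ h + b_out)        # scalar network output

rng = np.random.default_rng(0)
du, dy, nh = 3, 2, 8                       # the three parameters to be chosen a priori
W_in = rng.standard_normal((nh, du + dy)) * 0.5
b_in = rng.standard_normal(nh) * 0.1
W_out = rng.standard_normal(nh) * 0.5
b_out = 0.0

# Drive the network over a short input sequence, feeding its output back.
x = np.sin(np.linspace(0.0, 2.0, 20))
x_hist = np.zeros(du)                      # exogenous-input tapped delay line
y_hist = np.zeros(dy)                      # output-feedback tapped delay line
ys = []
for xt in x:
    x_hist = np.roll(x_hist, 1); x_hist[0] = xt
    yt = narx_step(x_hist, y_hist, W_in, b_in, W_out, b_out)
    y_hist = np.roll(y_hist, 1); y_hist[0] = yt
    ys.append(yt)

print(len(ys))  # one prediction per input sample
```

The point of the sketch is that `du` and `dy` must be fixed before training; the paper's contribution is to remove exactly these two choices by moving the dynamics into the activation function.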
Source
Electrical Automation (《电气自动化》), a Peking University core journal, 2006, No. 4, pp. 6-8 and 11 (4 pages in total)
Funding
Natural Science Foundation of Henan Province (0122050500)
Natural Science Foundation of the Education Department of Henan Province (200015200010)