Abstract
Most existing studies on approximation by neural networks assume target functions defined on a bounded interval (or a compact set). In practical applications, however, target functions are often defined on the whole real line (or an unbounded set). Addressing this problem, this paper studies the approximation of continuous functions defined on the whole real line by a class of interpolation neural networks. First, a density theorem for neural-network approximation (i.e., approximability) is proved by a constructive approach. Second, taking the modulus of continuity of the target function as the metric, the rate at which the interpolation networks approximate the target function is estimated. Finally, a numerical example is given for illustration. This work extends the theory of neural-network approximation: it gives a constructive algorithm for approximating continuous functions defined on the whole real line, and it reveals the relation between the approximation rate and the topological structure of the network.
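The constructive, interpolation-type approximation described in the abstract can be sketched numerically. The construction below (equidistant nodes on a truncated portion of the real line, one steep sigmoidal hidden unit per internal threshold, and output weights equal to successive function-value differences) is a minimal illustrative sketch of a single-hidden-layer interpolation network, not the authors' exact algorithm; the function `interp_network`, the node count, and the steepness parameter are all assumptions made for illustration.

```python
import numpy as np

def sigmoid(t):
    # Clip the argument to avoid overflow warnings for very steep units.
    return 1.0 / (1.0 + np.exp(-np.clip(t, -50.0, 50.0)))

def interp_network(f, a, b, n, steepness=200.0):
    """Build a single-hidden-layer sigmoidal network that approximately
    interpolates f at n+1 equidistant nodes on [a, b].

    Illustrative construction only (not the paper's algorithm):
    N(u) = f(x_0) + sum_i (f(x_i) - f(x_{i-1})) * sigmoid(k (u - t_i)),
    with thresholds t_i placed at the midpoints between nodes, so each
    steep sigmoid acts as a smoothed step of height f(x_i) - f(x_{i-1}).
    """
    x = np.linspace(a, b, n + 1)        # interpolation nodes
    t = (x[:-1] + x[1:]) / 2.0          # hidden-unit thresholds (midpoints)
    c = np.diff(f(x))                   # output weights = successive jumps

    def N(u):
        u = np.atleast_1d(np.asarray(u, dtype=float))
        return f(x[0]) + sigmoid(steepness * (u[:, None] - t)) @ c

    return N, x

# arctan is continuous on the whole real line with finite limits at +/- infinity,
# so it is a natural test function for approximation on an unbounded domain.
f = np.arctan
N, nodes = interp_network(f, -8.0, 8.0, 200)
err_nodes = float(np.max(np.abs(N(nodes) - f(nodes))))
```

Note that outside the truncation interval the network tends to the constants f(a) and f(b), which roughly match the limits of arctan at infinity; the maximum error on a fine grid inside the interval is controlled by the modulus of continuity of f over one mesh width, mirroring the kind of rate estimate the paper derives.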
Source
《计算机学报》
EI
CSCD
北大核心 (PKU Core Journal)
2012, Issue 4, pp. 786-795 (10 pages)
Chinese Journal of Computers
Funding
National Natural Science Foundation of China (60873206, 61101240); Natural Science Foundation of Zhejiang Province (Y6110117)
Keywords
neural networks
whole real line
approximation
rate
modulus of continuity