
Approximation of Function Defined on Full Axis of Real by a Class of Neural Networks: Density, Complexity and Constructive Algorithm
Abstract: Existing studies of approximation by neural networks mostly consider target functions defined on a bounded interval (or a compact set), whereas in practical applications the target function is often defined on the full real axis (or an unbounded set). Addressing this problem, this paper studies the approximation of continuous functions defined on the full real axis by a class of interpolation neural networks. First, a density theorem for approximation by these networks, i.e., approximability, is proved by a constructive method. Second, with the modulus of continuity of the target function as the metric, the rate at which the interpolation networks approximate the target function is estimated. Finally, a numerical example is given for illustration. The work extends the theory of neural network approximation by providing a constructive algorithm for approximating continuous functions defined on the full real axis, and it reveals the relation between the approximation rate and the topological structure of the network.
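The paper's actual network construction and rate estimates are not reproduced in this record, so the following is only a minimal illustrative sketch of the general idea behind constructive, interpolation-type approximation: a single-hidden-layer network with a sigmoidal activation whose weights are written down directly from sampled function values (no training), evaluated on a truncated portion of the real axis. The rate of such schemes is typically measured through the modulus of continuity ω(f, δ) = sup{ |f(x) − f(y)| : |x − y| ≤ δ }. The target function, node placement, sharpness parameter, and all names below are assumptions made for illustration, not the construction used in the paper.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def constructive_interpolation_net(f, nodes, sharpness):
    """Single-hidden-layer sigmoidal network built directly from samples f(nodes).

    The output weights are the successive differences f(x_i) - f(x_{i-1}), with
    hidden units centred at the midpoints between nodes, so for a sufficiently
    sharp sigmoid the network approximately interpolates f at the nodes --
    a constructive scheme, with no training and no linear solve.
    (Illustrative sketch only; not the construction from the paper.)
    """
    nodes = np.asarray(nodes, dtype=float)
    values = f(nodes)
    weights = np.diff(values)                    # f(x_i) - f(x_{i-1}), i = 1..n
    centers = 0.5 * (nodes[1:] + nodes[:-1])     # midpoints x_{i-1/2}

    def net(x):
        x = np.asarray(x, dtype=float)
        steps = sigmoid(sharpness * (x[..., None] - centers))
        return values[0] + steps @ weights

    return net

if __name__ == "__main__":
    # Hypothetical continuous target on the real axis; it decays at infinity,
    # so a symmetric truncated interval captures most of its variation.
    f = lambda x: np.exp(-0.1 * x**2) * np.sin(x)

    a, b = -10.0, 10.0
    for n in (20, 40, 80, 160):
        nodes = np.linspace(a, b, n + 1)
        h = (b - a) / n
        net = constructive_interpolation_net(f, nodes, sharpness=20.0 / h)
        grid = np.linspace(a, b, 4001)
        err = np.max(np.abs(f(grid) - net(grid)))
        # The sup error on the grid behaves roughly like the modulus of
        # continuity w(f, h) of the target at the node spacing h.
        print(f"n = {n:4d}, h = {h:.4f}, sup error on grid = {err:.4f}")
```

Under these assumptions, the printed sup-norm error shrinks roughly like ω(f, h) as the number of hidden units grows and the node spacing h decreases, which is the kind of rate-versus-network-topology relation the abstract refers to; handling the full (untruncated) real axis is precisely the additional difficulty the paper addresses.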
Source: Chinese Journal of Computers (《计算机学报》), 2012, No. 4: 786-795 (10 pages). Indexed in EI and CSCD; Peking University core journal list.
Funding: Supported by the National Natural Science Foundation of China (60873206, 61101240) and the Natural Science Foundation of Zhejiang Province (Y6110117).
Keywords: neural networks; full axis of real; approximation; rate; modulus of continuity

References

1. Chen Tianping. Approximation problems of neural networks and their application in system identification [J]. Science in China (Series A), 1994, 24(1): 1-7. (in Chinese)
2. Xu Zongben, Wang Jianjun. The essential order of approximation for nearly exponential type neural networks [J]. Science in China (Series F), 2006, 49(4): 446-460.
3. Cao Feilong, Zhang Yongquan, Zhang Weiguo. Single hidden layer neural networks and the best polynomial approximation [J]. Acta Mathematica Sinica (Chinese Series), 2007, 50(2): 385-392. (in Chinese)
4. Cybenko G. Approximation by superpositions of a sigmoidal function [J]. Math Control Signals Systems, 1989, 2: 303-314.
5. Chen Tianping, Chen Hong, Liu Ruey-wen. Approximation capability in C(R^n) by multilayer feedforward networks and related problems [J]. IEEE Trans Neural Networks, 1995, 6: 25-30.
6. Xu Zongben, Cao Feilong. Simultaneous L^p-approximation order for neural networks [J]. Neural Networks, 2005, 18: 914-923.
7. Huang G B, Babri H A. Feedforward neural networks with arbitrary bounded nonlinear activation functions [J]. IEEE Trans Neural Networks, 1998, 9(1): 224-229.
8. Shrivastava Y, Dasgupta S. Neural networks for exact matching of functions on a discrete domain [C]// Proceedings of the 29th IEEE Conference on Decision and Control. Honolulu, 1990: 1719-1724.
9. Ito Y, Saito K. Superposition of linearly independent functions and finite mappings by neural networks [J]. Math Sci, 1996, 21: 27-33.
10. Sontag E D. Feedforward nets for interpolation and classification [J]. J Comput System Sci, 1992, 45: 20-48.
