Journal Articles
2 articles found
1. L^p Approximation Capability of RBF Neural Networks (cited by: 1)
Authors: Dong Nan, Wei Wu, Jin Ling Long, Yu Mei Ma, Lin Jun Sun. 《Acta Mathematica Sinica, English Series》 SCIE CSCD, 2008, No. 9, pp. 1533-1540 (8 pages)
The $L^p$ approximation capability of radial basis function (RBF) neural networks is investigated. If $g: \mathbb{R}_+^1 \to \mathbb{R}^1$ and $g(\|x\|_{\mathbb{R}^n}) \in L^p_{\mathrm{loc}}(\mathbb{R}^n)$ with $1 \le p < \infty$, then the RBF neural networks with $g$ as the activation function can approximate any given function in $L^p(K)$ to any accuracy for any compact set $K \subset \mathbb{R}^n$, if and only if $g(x)$ is not an even polynomial.
Keywords: neural networks; radial basis function; L^p approximation capability
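As a concrete illustration of the theorem's setting (our own sketch, not from the paper): with the Gaussian activation $g(r) = e^{-r^2}$, which is not an even polynomial, an RBF network can approximate a target on a compact set by fitting only the output weights. The target function, grid, and center placement below are arbitrary choices for illustration.

```python
# Minimal sketch: approximate a target on the compact set K = [-1, 1]
# with an RBF network sum_i c_i * g(|x - t_i|), Gaussian activation.
import numpy as np

def rbf_design_matrix(x, centers, g):
    # Column j holds g(|x - t_j|) evaluated at all sample points x.
    return g(np.abs(x[:, None] - centers[None, :]))

g = lambda r: np.exp(-r**2)           # radial activation g: R_+ -> R
target = lambda x: np.sin(3 * x)      # stand-in for a function in L^p(K)

x = np.linspace(-1.0, 1.0, 200)       # sample points in K
centers = np.linspace(-1.0, 1.0, 25)  # hidden-unit centers t_i
Phi = rbf_design_matrix(x, centers, g)

# Fit the output weights c by least squares: min_c ||Phi c - f(x)||_2.
c, *_ = np.linalg.lstsq(Phi, target(x), rcond=None)
err = np.max(np.abs(Phi @ c - target(x)))
print(f"max error on K with 25 units: {err:.2e}")
```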
2. L^2(R^d) Approximation Capability of Incremental Constructive Feedforward Neural Networks with Random Hidden Units
Authors: Jin Ling Long, Zheng Xue Li, Dong Nan. 《Journal of Mathematical Research and Exposition》 CSCD, 2010, No. 5, pp. 799-807 (9 pages)
This paper studies the approximation capability of incremental constructive feedforward neural networks (FNN) with random hidden units for functions in $L^2(\mathbb{R}^d)$. Two kinds of three-layered feedforward neural networks are considered: radial basis function (RBF) neural networks and translation and dilation invariant (TDI) neural networks. In contrast with conventional approximation theories for neural networks, which mainly rely on an existence approach, we follow a constructive approach to prove that one may simply choose the parameters of the hidden units at random and then adjust only the weights between the hidden units and the output unit to make the network approximate any function in $L^2(\mathbb{R}^d)$ to any accuracy. Our result shows that, given any non-zero activation function $g: \mathbb{R}_+ \to \mathbb{R}$ with $g(\|x\|_{\mathbb{R}^d}) \in L^2(\mathbb{R}^d)$ for RBF hidden units, or any non-zero activation function $g(x) \in L^2(\mathbb{R}^d)$ for TDI hidden units, the incremental network function $f_n$ with randomly generated hidden units converges to any target function in $L^2(\mathbb{R}^d)$ with probability one as the number of hidden units $n \to \infty$, provided only that the weights between the hidden units and the output unit are properly adjusted.
Keywords: approximation; incremental feedforward neural networks; RBF neural networks; TDI neural networks; random hidden units
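A hedged sketch of the constructive idea in this abstract (the unit distribution, quadrature grid, and refitting scheme below are our assumptions, not the paper's scheme): hidden units are generated at random and never tuned; only the hidden-to-output weights are adjusted, here by a least-squares refit as each new unit arrives.

```python
# Sketch: incremental RBF network with random hidden units in 1-d.
import numpy as np

rng = np.random.default_rng(0)
g = lambda r: np.exp(-r**2)                        # RBF activation
target = lambda x: np.exp(-x**2) * np.cos(4 * x)   # target in L^2(R)

x = np.linspace(-5.0, 5.0, 1000)   # quadrature grid standing in for R
w = x[1] - x[0]                    # quadrature weight for the L^2 norm

cols = []                          # responses of the random hidden units
for n in range(1, 51):
    center = rng.uniform(-5.0, 5.0)   # random hidden-unit parameters
    width = rng.uniform(0.3, 2.0)
    cols.append(g(np.abs(x - center) / width))
    Phi = np.stack(cols, axis=1)
    # Only the hidden-to-output weights are (re)fit; units stay fixed.
    c, *_ = np.linalg.lstsq(Phi, target(x), rcond=None)
    if n % 10 == 0:
        l2_err = np.sqrt(w * np.sum((Phi @ c - target(x))**2))
        print(f"n = {n:2d} random units, L2 error ~ {l2_err:.2e}")
```

In this simplified variant the error is monotone non-increasing in $n$ because each refit minimizes over a growing span, mirroring (but not reproducing) the paper's almost-sure convergence as $n \to \infty$.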