Journal Articles
5 articles found
1. APPROXIMATION CAPABILITIES OF MULTILAYER FEEDFORWARD REGULAR FUZZY NEURAL NETWORKS (cited 2 times)
Authors: Liu Puyin (Dept. of Math., National Univ. of Defence Technology, Changsha 410073; Dept. of Math., Beijing Normal Univ., Beijing 100875)
Applied Mathematics (A Journal of Chinese Universities), SCIE CSCD, 2001, No. 1, pp. 45-57
Four-layer feedforward regular fuzzy neural networks are constructed. Universal approximation of certain continuous fuzzy functions defined on F_0(R)^n by these four-layer fuzzy neural networks is shown. First, multivariate Bernstein polynomials associated with fuzzy-valued functions are employed to approximate continuous fuzzy-valued functions defined on each compact set of R^n. Second, by introducing cut-preserving fuzzy mappings, the equivalent conditions for continuous fuzzy functions that can be arbitrarily closely approximated by regular fuzzy neural networks are shown. Finally, several necessary and sufficient conditions characterizing the approximation capabilities of regular fuzzy neural networks are obtained, and some concrete fuzzy functions illustrate the conclusions.
Keywords: regular fuzzy neural networks, cut-preserving fuzzy mappings, universal approximators, fuzzy-valued Bernstein polynomials
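The Bernstein construction in the abstract's first step has a simple crisp (real-valued) analogue. The sketch below approximates an ordinary continuous function on [0, 1]; the paper extends this idea to fuzzy-valued functions on compact subsets of R^n, which is not reproduced here.

```python
import math

def bernstein_approx(f, n):
    """Return the degree-n Bernstein polynomial approximant of f on [0, 1].

    B_n(f)(x) = sum_k f(k/n) * C(n, k) * x^k * (1 - x)^(n - k),
    which converges uniformly to f as n grows (Weierstrass approximation).
    """
    def p(x):
        return sum(
            f(k / n) * math.comb(n, k) * x**k * (1 - x) ** (n - k)
            for k in range(n + 1)
        )
    return p

# Approximate f(x) = x^2; for this f the Bernstein error is exactly x(1-x)/n.
p100 = bernstein_approx(lambda x: x * x, 100)
```

At x = 0.5 with n = 100 the error is 0.25/100 = 0.0025, shrinking as n grows.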
2. Fuzzy inference systems with no any rule base and linearly parameter growth (cited 2 times)
Authors: Shitong WANG, Korris F. L. CHUNG, Jieping LU, Bin HAN, Dewen HU
Control Theory & Applications (English Edition), EI, 2004, No. 2, pp. 185-192
A class of new fuzzy inference systems, New-FISs, is presented. Compared with a standard fuzzy system, a New-FIS is still a universal approximator yet has no fuzzy rule base and exhibits only linear parameter growth. It thus effectively overcomes the second "curse of dimensionality": the exponential growth in the number of parameters of a fuzzy system as the number of input variables increases. The result is greatly reduced computational complexity, making New-FISs especially suitable for applications where complexity matters most relative to approximation accuracy.
Keywords: fuzzy inference, fuzzy systems, universal approximation, computational complexity, linear parameter growth
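The abstract does not spell out the New-FIS construction, so the sketch below shows one generic way a rule-free, additive fuzzy basis expansion keeps the parameter count at O(d·m) (d inputs, m memberships each) instead of the O(m^d) of a grid-style rule base. The function name and the Gaussian membership choice are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def additive_fuzzy_predict(x, centers, widths, weights):
    """Rule-free additive fuzzy inference (illustrative sketch).

    Each of the d inputs carries m one-dimensional Gaussian memberships;
    firing strengths are pooled additively rather than combined through a
    d-dimensional rule grid, so parameters grow as O(d * m), not O(m ** d).

    x: shape (d,); centers, widths, weights: shape (d, m).
    """
    mu = np.exp(-((x[:, None] - centers) ** 2) / (2.0 * widths**2))  # (d, m)
    return float(np.sum(weights * mu) / (np.sum(mu) + 1e-12))

# d = 2 inputs, m = 3 memberships each: 2*3 parameter triples, not 3**2 rules.
centers = np.array([[-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0]])
widths = np.ones((2, 3))
weights = np.array([[0.0, 0.5, 1.0], [0.0, 0.5, 1.0]])
y = additive_fuzzy_predict(np.zeros(2), centers, widths, weights)
```

With symmetric memberships and weights ramping from 0 to 1, the prediction at the origin lands at the midpoint 0.5.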
3. Reducing parameter space for neural network training (cited 1 time)
Authors: Tong Qin, Ling Zhou, Dongbin Xiu
Theoretical & Applied Mechanics Letters, CAS CSCD, 2020, No. 3, pp. 170-181
For neural networks (NNs) with rectified linear unit (ReLU) or binary activation functions, we show that their training can be accomplished in a reduced parameter space. Specifically, the weights in each neuron can be trained on the unit sphere, as opposed to the entire space, and the threshold can be trained in a bounded interval, as opposed to the real line. We show that NNs in the reduced parameter space are mathematically equivalent to standard NNs with parameters in the whole space. The reduced parameter space facilitates the optimization procedure for network training, as the search space becomes (much) smaller. We demonstrate the improved training performance using numerical examples.
Keywords: rectified linear unit, network, universal approximator, reduced space
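The equivalence behind the reduced parameter space follows from the positive homogeneity of ReLU: relu(w·x + b) = ||w||·relu((w/||w||)·x + b/||w||), so the weight direction can be constrained to the unit sphere while the scale factor is absorbed downstream. A minimal numerical check:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=5)   # unnormalized weights
b = rng.normal()         # unnormalized threshold
x = rng.normal(size=5)

r = np.linalg.norm(w)
# Positive homogeneity: relu(a * z) = a * relu(z) for a > 0, so the weight
# vector can live on the unit sphere, with the scale r absorbed by the next
# layer and the threshold rescaled to b / r.
lhs = relu(w @ x + b)
rhs = r * relu((w / r) @ x + b / r)
```

The two values agree exactly up to floating-point error, for any nonzero w.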
4. Length-Changeable Incremental Extreme Learning Machine (cited 2 times)
Authors: You-Xi Wu, Dong Liu, He Jiang
Journal of Computer Science & Technology, SCIE EI CSCD, 2017, No. 3, pp. 630-643
Extreme learning machine (ELM) is a learning algorithm for generalized single-hidden-layer feed-forward networks (SLFNs). To obtain a suitable network architecture, the incremental extreme learning machine (I-ELM) constructs an SLFN by adding hidden nodes one by one. Although various I-ELM-class algorithms have been proposed to improve the convergence rate or to minimize training error, they either leave the construction scheme of I-ELM unchanged or face an over-fitting risk. Making the testing error converge quickly and stably is therefore an important issue. In this paper, we propose a new incremental ELM, referred to as the Length-Changeable Incremental Extreme Learning Machine (LCI-ELM). It allows more than one hidden node to be added to the network at a time, and the existing network is treated as a whole when the output weights are tuned. The output weights of newly added hidden nodes are determined with a partial error-minimizing method. We prove that an SLFN constructed with LCI-ELM has universal approximation capability on a compact input set as well as on a finite training set. Experimental results demonstrate that LCI-ELM achieves a higher convergence rate and a lower over-fitting risk than several competitive I-ELM-class algorithms.
Keywords: single-hidden-layer feed-forward network (SLFN), incremental extreme learning machine (I-ELM), random hidden node, convergence rate, universal approximation
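LCI-ELM's multi-node, whole-network update is not reproduced here, but the baseline it extends can be: classic I-ELM adds one random sigmoid node at a time and sets its output weight by a closed-form residual fit, which guarantees the training error never increases. The function names below are illustrative, not from the paper.

```python
import numpy as np

def i_elm_fit(X, y, max_nodes=50, seed=0):
    """Classic I-ELM sketch: grow an SLFN one random hidden node at a time.

    Each new node's output weight beta = <residual, h> / <h, h> is the
    least-squares fit of the node to the current residual, so the squared
    training error is non-increasing in the number of nodes.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    residual = y.astype(float).copy()
    nodes = []
    for _ in range(max_nodes):
        w, b = rng.normal(size=d), rng.normal()
        h = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # random sigmoid hidden node
        beta = (residual @ h) / (h @ h)         # closed-form output weight
        residual = residual - beta * h
        nodes.append((w, b, beta))
    return nodes, residual

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]
nodes, residual = i_elm_fit(X, y, max_nodes=100)
```

LCI-ELM differs by adding several nodes per step and re-tuning the whole network's output weights jointly, which is what buys the faster, more stable convergence the abstract reports.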
5. A novel fuzzy neural network and its approximation capability (cited 2 times)
Author: Liu Puyin
Science in China (Series F), 2001, No. 3, pp. 184-194
Polygonal fuzzy numbers are employed to define a new fuzzy arithmetic. A novel extension principle is also introduced for increasing functions σ: R → R. It is thus convenient to construct a fuzzy neural network model with succinct learning algorithms. Such a system possesses universal approximation capability; that is, the corresponding three-layer feedforward fuzzy neural networks can be universal approximators of continuously increasing fuzzy functions.
Keywords: polygonal fuzzy number, fuzzy neural network, universal approximator, fuzzy arithmetic
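For an increasing σ, the extension principle applied to a fuzzy number reduces to mapping α-cut endpoints through σ. The sketch below uses triangular fuzzy numbers, the simplest polygonal case; the paper's polygonal numbers carry more vertices, but the vertex-wise rule illustrated here is the same idea.

```python
def extend_increasing(sigma, tfn):
    """Apply an increasing function sigma to a triangular fuzzy number.

    A triangular number (a, b, c) with a <= b <= c is the simplest
    polygonal fuzzy number. Because sigma is increasing, every alpha-cut
    [l, r] maps to [sigma(l), sigma(r)], so the extension acts vertex-wise.
    """
    a, b, c = tfn
    return (sigma(a), sigma(b), sigma(c))

# An increasing affine map t -> 2t + 1 applied to the fuzzy number (0, 1, 2):
out = extend_increasing(lambda t: 2 * t + 1, (0.0, 1.0, 2.0))
```

Monotonicity is what makes this cheap: without it, each α-cut image would require a min/max search over the cut interval.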