Abstract: Using best approximation theory, it is first proved that SISO (single-input single-output) linear Takagi-Sugeno (TS) fuzzy systems can approximate an arbitrary polynomial, which, by the Weierstrass approximation theorem, can uniformly approximate any continuous function on a compact domain. New sufficient conditions for general linear SISO TS fuzzy systems to be universal approximators are then obtained, and formulae are derived to calculate the number of input fuzzy sets needed to satisfy a given approximation accuracy. The presented result is compared with results in the existing literature; the comparison shows that it requires fewer input fuzzy sets, which simplifies the design of the fuzzy system, and examples are given to show its effectiveness.
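For intuition, here is a minimal sketch (an illustration, not the paper's construction) of a SISO TS fuzzy system with evenly spaced triangular input fuzzy sets on [0, 1] and tangent-line rule consequents; the target function and the consequent choice are assumptions made for the example.

```python
# Sketch: a SISO TS fuzzy system on [0, 1] with n_sets evenly spaced
# triangular fuzzy sets; each rule's linear consequent is the tangent
# line of f at the set's center (an illustrative choice, not the paper's).
import numpy as np

def ts_fuzzy_approx(f, n_sets, x):
    centers = np.linspace(0.0, 1.0, n_sets)
    h = centers[1] - centers[0]
    # Triangular membership degrees; rows index rules, columns index points.
    mu = np.maximum(0.0, 1.0 - np.abs(x[None, :] - centers[:, None]) / h)
    eps = 1e-6
    slope = (f(centers + eps) - f(centers - eps)) / (2 * eps)  # numeric slope
    y_rule = f(centers)[:, None] + slope[:, None] * (x[None, :] - centers[:, None])
    return (mu * y_rule).sum(axis=0) / mu.sum(axis=0)  # weighted rule average

x = np.linspace(0.0, 1.0, 1001)
for n in (3, 5, 9):
    err = np.max(np.abs(ts_fuzzy_approx(np.square, n, x) - x**2))
    print(f"{n} fuzzy sets: sup-error = {err:.4f}")  # error falls as n grows
```

With triangular memberships forming a partition of unity, halving the set spacing roughly quarters the sup-error in this example, the kind of accuracy-versus-set-count trade-off that the derived formulae quantify.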
Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 69974041 and 69974006).
Abstract: Four-layer feedforward regular fuzzy neural networks are constructed, and universal approximation of certain continuous fuzzy functions defined on $F_0(\mathbb{R})^n$ by such networks is shown. First, multivariate Bernstein polynomials associated with fuzzy-valued functions are employed to approximate continuous fuzzy-valued functions defined on each compact set of $\mathbb{R}^n$. Second, by introducing cut-preserving fuzzy mappings, equivalent conditions are given for continuous fuzzy functions to be arbitrarily closely approximated by regular fuzzy neural networks. Finally, several necessary and sufficient conditions characterizing the approximation capability of regular fuzzy neural networks are obtained, and some concrete fuzzy functions are given to demonstrate the conclusions.
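For reference, the classical univariate Bernstein operator, of which the paper employs a multivariate, fuzzy-valued analogue, is

$$B_n(f;x) \;=\; \sum_{k=0}^{n} f\!\left(\frac{k}{n}\right)\binom{n}{k}\, x^{k}(1-x)^{n-k}, \qquad x\in[0,1],$$

and $B_n(f;\cdot)\to f$ uniformly on $[0,1]$ for every continuous $f$.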
Funding: This work was supported by the RGC Competitive Earmarked Research Grant (No. PolyU 5065/98E), the Natural Science Foundation of China (No. 60225015), the Natural Science Foundation of Jiangsu Province (No. BK2003017), and the National Key Laboratory of Novel Software Technology.
Abstract: A class of new fuzzy inference systems, New-FISs, is presented. Compared with the standard fuzzy system, a New-FIS is still a universal approximator, yet it has no fuzzy rule base and its number of parameters grows only linearly. It thus effectively overcomes the second "curse of dimensionality": the exponential growth in the number of parameters of a fuzzy system as the number of input variables increases. This yields greatly reduced computational complexity, making New-FISs especially suitable for applications where complexity matters more than approximation accuracy.
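The "second curse of dimensionality" referred to here is the parameter explosion of a standard complete fuzzy rule base: with $m$ fuzzy sets per input and $n$ inputs,

$$\#\text{rules} = m^{n}, \qquad \text{e.g. } m=5,\ n=10 \;\Rightarrow\; 5^{10}\approx 9.8\times 10^{6},$$

whereas a New-FIS keeps the parameter count growing only linearly in $n$ (the values $m=5$, $n=10$ are an illustrative example).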
Abstract: For neural networks (NNs) with rectified linear unit (ReLU) or binary activation functions, we show that their training can be accomplished in a reduced parameter space. Specifically, the weights in each neuron can be trained on the unit sphere, as opposed to the entire space, and the threshold can be trained in a bounded interval, as opposed to the real line. We show that the NNs in the reduced parameter space are mathematically equivalent to the standard NNs with parameters in the whole space. The reduced parameter space should facilitate the optimization procedure for network training, as the search space becomes (much) smaller. We demonstrate the improved training performance using numerical examples.
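The equivalence rests on the positive homogeneity of ReLU: the scale of a weight vector can be moved out of the neuron and absorbed into its outgoing weight, so only the weight direction matters. A minimal numerical check of this identity (an illustration on toy data, not the paper's training procedure):

```python
# Check that a ReLU neuron with arbitrary (w, b) equals a rescaled neuron
# whose weight lies on the unit sphere (toy data; illustration only).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 4))   # random inputs
w = rng.normal(size=4)           # arbitrary weight vector
b = rng.normal()                 # arbitrary threshold

relu = lambda z: np.maximum(z, 0.0)
r = np.linalg.norm(w)
w_hat, b_hat = w / r, b / r      # unit-sphere weight, rescaled threshold

# ReLU is positively homogeneous: relu(r * z) = r * relu(z) for r > 0,
# so the scale r can be absorbed into the neuron's outgoing weight.
print(np.allclose(relu(x @ w + b), r * relu(x @ w_hat + b_hat)))  # True
```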
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 19601012).
Abstract: The approximation capability of regular fuzzy neural networks for fuzzy functions is studied. When σ is a nonconstant, bounded, and continuous function on $\mathbb{R}$, some equivalent conditions are obtained under which continuous fuzzy functions can be approximated to any degree of accuracy by the four-layer feedforward regular fuzzy neural networks $\sum\limits_{k = 1}^q {\tilde W_k } \cdot \left( {\sum\limits_{j = 1}^p {\tilde V_{kj} \cdot \sigma (\tilde X \cdot \tilde U_j + \tilde \Theta _j )} } \right)$. Finally, a few examples of such fuzzy functions are given.
Funding: The author would like to thank Professor H. Wang for helpful suggestions. This work was supported by the National Natural Science Foundation of China (Grant Nos. 69974006 and 69974041).
Abstract: Polygonal fuzzy numbers are employed to define a new fuzzy arithmetic. A novel extension principle is also introduced for increasing functions $\sigma: \mathbb{R} \to \mathbb{R}$. This makes it convenient to construct a fuzzy neural network model with succinct learning algorithms. Such a system possesses universal approximation capability: the corresponding three-layer feedforward fuzzy neural networks are universal approximators for continuously increasing fuzzy functions.
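As a rough illustration of why monotonicity simplifies the extension principle, consider a level-cut view of a polygonal fuzzy number (the representation and data below are hypothetical, not the paper's notation): for a continuous increasing σ, each cut $[a, b]$ maps simply to $[\sigma(a), \sigma(b)]$, so the image is again polygonal with the same membership grades.

```python
# Sketch: a polygonal fuzzy number stored as level-cut endpoints at equally
# spaced membership grades, and the endpoint-wise action of an increasing
# sigma on each cut (hypothetical representation, for illustration only).
import numpy as np

def extend_increasing(sigma, left, right):
    """For increasing sigma, the cut [left[k], right[k]] maps to
    [sigma(left[k]), sigma(right[k])] at the same membership grade."""
    return sigma(left), sigma(right)

left = np.array([0.0, 0.5, 1.0, 1.5, 2.0])    # nondecreasing left endpoints
right = np.array([6.0, 5.0, 4.5, 4.2, 4.0])   # nonincreasing right endpoints
L, R = extend_increasing(np.tanh, left, right)
print(np.all(np.diff(L) >= 0) and np.all(np.diff(R) <= 0))  # cuts stay nested
```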
Funding: This work was partially supported by the National Natural Science Foundation of China under Grant Nos. 61673159 and 61370144, and the Natural Science Foundation of Hebei Province of China under Grant No. F2016202145.
Abstract: Extreme learning machine (ELM) is a learning algorithm for generalized single-hidden-layer feedforward networks (SLFNs). To obtain a suitable network architecture, the Incremental Extreme Learning Machine (I-ELM) is a variant of ELM that constructs SLFNs by adding hidden nodes one by one. Although various I-ELM-class algorithms have been proposed to improve the convergence rate or to minimize the training error, they either leave the construction scheme of I-ELM unchanged or face the risk of over-fitting. Making the testing error converge quickly and stably therefore becomes an important issue. In this paper, we propose a new incremental ELM, referred to as the Length-Changeable Incremental Extreme Learning Machine (LCI-ELM). It allows more than one hidden node to be added to the network at a time, and the existing network is regarded as a whole when the output weights are tuned. The output weights of newly added hidden nodes are determined using a partial error-minimizing method. We prove that an SLFN constructed using LCI-ELM has universal approximation capability on a compact input set as well as on a finite training set. Experimental results demonstrate that LCI-ELM achieves a higher convergence rate and a lower over-fitting risk than several competitive I-ELM-class algorithms.
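For context, here is a sketch of the baseline I-ELM construction that LCI-ELM generalizes: one random hidden node is added per step, and its output weight is set in closed form to minimize the current residual. (LCI-ELM itself adds variable-length groups of nodes and retunes the existing network as a whole, which this sketch does not do; the toy data are assumptions for illustration.)

```python
# Baseline I-ELM sketch: grow an SLFN by adding random sigmoid nodes one
# at a time; each new output weight minimizes the residual in closed form.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 2))    # toy training inputs
y = np.sin(3.0 * X[:, 0]) * X[:, 1]          # toy regression target
e = y.copy()                                 # residual, initially the target

model = []
for _ in range(100):
    w, b = rng.normal(size=2), rng.normal()  # random hidden-node parameters
    h = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # node's activation vector
    beta = (e @ h) / (h @ h)                 # residual-minimizing output weight
    model.append((w, b, beta))
    e -= beta * h                            # training error never increases
print(f"training RMSE after {len(model)} nodes: {np.sqrt(np.mean(e**2)):.4f}")
```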