Journal Articles
4 articles found
1. Convergence of gradient method for Elman networks
Authors: Wei WU, Dongpo XU, Zhengxue LI. Applied Mathematics and Mechanics (English Edition), SCIE/EI, 2008(9): 1231-1238 (8 pages)
The gradient method for training Elman networks with a finite training sample set is considered. Monotonicity of the error function in the iteration is shown. Weak and strong convergence results are proved, indicating that the gradient of the error function goes to zero and the weight sequence goes to a fixed point, respectively. A numerical example is given to support the theoretical findings.
Keywords: Elman network, gradient learning algorithm, convergence, monotonicity
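The setup described in this abstract can be illustrated with a minimal sketch. The code below is not the paper's analysis, just a toy Elman network trained by plain gradient descent on a sine-prediction task; following Elman's original scheme, the context (previous hidden state) is treated as a fixed extra input during backpropagation, so no gradient flows through the recurrent connection. The task, layer sizes, and learning rate are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 1, 8, 1

# Weight matrices: input-to-hidden, context-to-hidden, hidden-to-output.
Wx = rng.uniform(-0.4, 0.4, (n_hid, n_in))
Wh = rng.uniform(-0.4, 0.4, (n_hid, n_hid))
Wo = rng.uniform(-0.4, 0.4, (n_out, n_hid))

# Toy finite training set: predict the next sample of a sine wave.
xs = np.sin(np.linspace(0, 4 * np.pi, 41))
inputs, targets = xs[:-1], xs[1:]

def run_epoch(lr=0.1):
    """One pass of online gradient descent; returns mean squared error."""
    global Wx, Wh, Wo
    h = np.zeros(n_hid)
    loss = 0.0
    for x, t in zip(inputs, targets):
        x = np.array([x])
        h_new = np.tanh(Wx @ x + Wh @ h)   # hidden state with context
        y = Wo @ h_new                      # linear output layer
        e = y - t
        loss += 0.5 * float(e @ e)
        # Backprop, with the context h treated as a constant input.
        gWo = np.outer(e, h_new)
        dh = (Wo.T @ e) * (1 - h_new ** 2)
        Wo -= lr * gWo
        Wx -= lr * np.outer(dh, x)
        Wh -= lr * np.outer(dh, h)
        h = h_new
    return loss / len(inputs)

losses = [run_epoch() for _ in range(50)]
print(losses[0], losses[-1])  # the error should shrink over training
```

On this toy task the per-epoch error decreases as training proceeds, which is the qualitative behaviour (monotone error, vanishing gradient) that the paper establishes rigorously.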
2. Error Analysis on Hermite Learning with Gradient Data (Cited by: 1)
Authors: Baohuai SHENG, Jianli WANG, Daohong XIANG. Chinese Annals of Mathematics, Series B, SCIE/CSCD, 2018(4): 705-720 (16 pages)
This paper deals with Hermite learning, which aims at obtaining the target function from samples of function values and gradient values. Error analysis is conducted for these algorithms by means of approaches from convex analysis in the framework of multi-task vector learning, and improved learning rates are derived.
Keywords: Hermite learning, gradient learning, learning rate, convex analysis, multi-task learning, differentiable reproducing kernel Hilbert space
3. An SVAD Algorithm Based on the FNNKD Method
Authors: Chen Dong, Zhang Yan. Journal of Electronics (China), 2002(3): 280-288 (9 pages)
The capacity of mobile communication systems is improved by using Voice Activity Detection (VAD) technology. In this letter, a novel VAD algorithm, the SVAD algorithm based on the Fuzzy Neural Network Knowledge Discovery (FNNKD) method, is proposed. The performance of the SVAD algorithm is discussed and compared with the traditional algorithm recommended by ITU G.729B in different situations. The simulation results show that the SVAD algorithm performs better.
Keywords: FNNKD, VAD, Adaptive Multi-Rate (AMR), Heuristic Gradient Learning Algorithm (HGLA)
4. Negative effects of sufficiently small initial weights on back-propagation neural networks (Cited by: 2)
Authors: Yan LIU, Jie YANG, Long LI, Wei WU. Journal of Zhejiang University-Science C (Computers and Electronics), SCIE/EI, 2012(8): 585-592 (8 pages)
In the training of feedforward neural networks, it is usually suggested that the initial weights should be small in magnitude in order to prevent premature saturation. The aim of this paper is to point out the other side of the story: in some cases, the gradient of the error function is zero not only for infinitely large weights but also for zero weights. Slow convergence at the beginning of the training procedure is often the result of sufficiently small initial weights. Therefore, we suggest that, in these cases, the initial values of the weights should be neither too large nor too small. For instance, a typical range of choices for the initial weights might be something like (-0.4, -0.1) U (0.1, 0.4), rather than (-0.1, 0.1) as suggested by the usual strategy. Our theory that medium-sized weights should be used has also been extended to a few commonly used transfer functions and error functions. Numerical experiments are carried out to support our theoretical findings.
Keywords: neural networks, back-propagation, gradient learning method, convergence
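The initialization range quoted in this abstract, (-0.4, -0.1) U (0.1, 0.4), is concrete enough to sketch. The helper below is an illustrative implementation of such a medium-magnitude initializer, not code from the paper; the function name and default bounds are assumptions for the example.

```python
import numpy as np

def medium_init(shape, low=0.1, high=0.4, rng=None):
    """Sample weights uniformly from (-high, -low) U (low, high).

    Avoids both very large weights (premature saturation) and
    near-zero weights (near-zero gradient at the start of training).
    """
    rng = rng if rng is not None else np.random.default_rng()
    mag = rng.uniform(low, high, size=shape)      # magnitude in [low, high)
    sign = rng.choice([-1.0, 1.0], size=shape)    # random sign
    return sign * mag

W = medium_init((3, 4), rng=np.random.default_rng(42))
print(np.all((np.abs(W) >= 0.1) & (np.abs(W) < 0.4)))
```

Every sampled weight has magnitude at least `low`, so no entry starts in the flat near-zero region that the paper identifies as a cause of slow early convergence.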