Speed up Training of the Recurrent Neural Network Based on Constrained Optimization Techniques
Authors: 陈珂, 包威权, 迟惠生. Journal of Computer Science & Technology (indexed in SCIE, EI, CSCD), 1996, Issue 6, pp. 581-588 (8 pages).
In this paper, the constrained optimization technique is explored for a substantial problem, namely accelerating training of the globally recurrent neural network. Unlike most previous methods for feedforward neural networks, the authors adopt the constrained optimization technique to improve the gradient-based algorithm of the globally recurrent neural network, adapting the learning rate during training. Using the recurrent network with the improved algorithm, experiments on two real-world problems, namely filtering additive noise in acoustic data and classification of temporal signals for speaker identification, have been performed. The experimental results show that the recurrent neural network with the improved learning algorithm trains significantly faster and achieves satisfactory performance.
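The abstract does not spell out the paper's exact constrained-optimization scheme, but the core idea it describes, adapting the learning rate of a gradient-based update subject to a constraint, can be illustrated with a generic sketch. The norm constraint `max_step` and the helper `constrained_sgd_step` below are illustrative assumptions, not the authors' method:

```python
import math

def constrained_sgd_step(w, grad, lr, max_step=0.5):
    """One gradient-descent step in which the effective learning rate
    is adapted so the weight update never exceeds a norm constraint.
    This is a hypothetical stand-in for the paper's scheme."""
    norm = math.sqrt(sum(g * g for g in grad))
    # Shrink the effective learning rate whenever the raw step
    # lr * grad would violate the constraint ||step|| <= max_step.
    eff_lr = min(lr, max_step / norm) if norm > 0 else lr
    return [wi - eff_lr * gi for wi, gi in zip(w, grad)], eff_lr

# Toy example: minimize f(w) = (w0 - 3)^2 + (w1 + 1)^2.
w = [0.0, 0.0]
for _ in range(100):
    grad = [2 * (w[0] - 3), 2 * (w[1] + 1)]
    w, _ = constrained_sgd_step(w, grad, lr=0.2)
```

Early in training, where gradients are large, the constraint caps the step size (implicitly lowering the learning rate); near a minimum the full base rate applies, which is one simple way an adaptive rate can stabilize and speed up gradient-based training.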
Keywords: recurrent neural network, adaptive learning rate, gradient-based algorithm