Journal Articles
7 articles found
Fast Learning in Spiking Neural Networks by Learning Rate Adaptation (Cited by: 2)
Authors: 方慧娟, 罗继亮, 王飞. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2012, Issue 6, pp. 1219-1224 (6 pages)
For accelerating the supervised learning by the SpikeProp algorithm with the temporal coding paradigm in spiking neural networks (SNNs), three learning rate adaptation methods (heuristic rule, delta-delta rule, and delta-bar-delta rule), which are used to speed up training in artificial neural networks, are used to develop training algorithms for feedforward SNNs. The performance of these algorithms is investigated in four experiments: the classical XOR (exclusive or) problem, the Iris dataset, fault diagnosis in the Tennessee Eastman process, and Poisson trains of discrete spikes. The results demonstrate that all three learning rate adaptation methods speed up the convergence of SNNs compared with the original SpikeProp algorithm. Furthermore, if the adaptive learning rate is used in combination with the momentum term, the two modifications balance each other in a beneficial way to achieve rapid and steady convergence. Of the three learning rate adaptation methods, the delta-bar-delta rule performs best: with momentum it has the fastest convergence rate, the most stable training process, and the highest learning accuracy. The proposed algorithms are simple and efficient, and consequently valuable for practical applications of SNNs.
Keywords: spiking neural networks; learning algorithm; learning rate adaptation; Tennessee Eastman process
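Of the three adaptation rules this abstract compares, delta-bar-delta is the classic one: each weight keeps its own learning rate, which grows additively while the current gradient agrees in sign with an exponentially averaged gradient trace, and shrinks multiplicatively when it disagrees. A minimal sketch follows; the hyperparameter values (`kappa`, `phi`, `theta`) are illustrative, not taken from the paper.

```python
import numpy as np

def delta_bar_delta_step(w, grad, lr, delta_bar, kappa=0.01, phi=0.5, theta=0.7):
    """One delta-bar-delta update of weights, per-weight learning rates,
    and the exponential gradient trace."""
    agree = grad * delta_bar                     # sign agreement with the averaged gradient
    lr = np.where(agree > 0, lr + kappa,         # same sign: grow the rate additively
         np.where(agree < 0, lr * phi, lr))      # opposite sign: shrink it multiplicatively
    w = w - lr * grad                            # ordinary gradient descent step
    delta_bar = (1 - theta) * grad + theta * delta_bar  # update the trace
    return w, lr, delta_bar
```

On a one-dimensional quadratic, the rate ramps up while progress is steady and is cut back as soon as the gradient starts oscillating, which is the balancing behavior the abstract describes.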
Tuning the Learning Rate for Stochastic Variational Inference
Authors: Xi-Ming Li, Ji-Hong Ouyang. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2016, Issue 2, pp. 428-436 (9 pages)
Stochastic variational inference (SVI) can learn topic models from very large corpora. It optimizes the variational objective with the stochastic natural gradient algorithm and a decreasing learning rate. This rate is crucial for SVI, yet in practice it is often tuned by hand. To address this, we develop a novel algorithm that tunes the learning rate of each iteration adaptively. The proposed algorithm uses the Kullback-Leibler (KL) divergence to measure the similarity between the variational distribution with the noisy update and that with the batch update, and then optimizes the learning rate by minimizing this KL divergence. We apply our algorithm to two representative topic models: latent Dirichlet allocation and the hierarchical Dirichlet process. Experimental results indicate that our algorithm performs better and converges faster than commonly used learning rates.
Keywords: stochastic variational inference; online learning; adaptive learning rate; topic model
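For context, the hand-tuned schedule this paper replaces is typically the Robbins-Monro decay rho_t = (t + tau)^(-kappa), which drives the blend between the current global variational parameter and a noisy per-minibatch estimate. A minimal sketch, with illustrative parameter names and defaults (not values from the paper):

```python
import numpy as np

def robbins_monro_rate(t, tau=1.0, kappa=0.7):
    """Decreasing step size satisfying the Robbins-Monro conditions
    (sum of rho_t diverges, sum of rho_t^2 converges) for 0.5 < kappa <= 1."""
    return (t + tau) ** (-kappa)

def svi_update(lam, lam_hat, rho):
    """Blend the current global parameter with the noisy batch estimate."""
    return (1.0 - rho) * lam + rho * lam_hat
```

The paper's contribution is choosing `rho` per iteration by minimizing a KL divergence instead of fixing `tau` and `kappa` by hand.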
Improving the accuracy of heart disease diagnosis with an augmented back propagation algorithm
Author: 颜红梅. Journal of Chongqing University (CAS), 2003, Issue 1, pp. 31-34 (4 pages)
A multilayer perceptron neural network system is established to support the diagnosis of the five most common heart diseases (coronary heart disease, rheumatic valvular heart disease, hypertension, chronic cor pulmonale, and congenital heart disease). A momentum term, an adaptive learning rate, a forgetting mechanism, and the conjugate gradient method are introduced to improve the basic BP algorithm, aiming to speed up its convergence and enhance diagnostic accuracy. A heart disease database consisting of 352 samples is used for training and testing the system, and performance is assessed by cross-validation. It is found that as the basic BP algorithm is improved step by step, the convergence speed and classification accuracy of the network increase, and the system shows strong promise for supporting heart disease diagnosis.
Keywords: multilayer perceptron; back propagation algorithm; heart disease; momentum term; adaptive learning rate; forgetting mechanism; conjugate gradient method
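The momentum term and a heuristic learning-rate rule of the kind mentioned above can be sketched as follows. The "bold driver" rule here is a common stand-in for error-driven adaptation (grow the rate while the error falls, cut it sharply when the error rises); its constants and the function names are illustrative, not from the paper.

```python
import numpy as np

def bp_step(w, grad, prev_dw, lr, alpha=0.9):
    """BP weight update with a momentum term: dw = -lr*grad + alpha*prev_dw."""
    dw = -lr * grad + alpha * prev_dw
    return w + dw, dw

def bold_driver(lr, err, prev_err, up=1.05, down=0.7):
    """Grow the learning rate while the training error falls,
    cut it sharply when the error rises."""
    return lr * up if err < prev_err else lr * down
```

On a smooth error surface, the momentum term averages out oscillations across iterations, which is why it combines well with an adaptive rate.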
DP-ASSGD: Differential Privacy Protection Based on Stochastic Gradient Descent Optimization
Authors: Qiang Gao, Han Sun, Zhifang Wang. 《国际计算机前沿大会会议论文集》 (EI), 2023, Issue 1, pp. 298-308 (11 pages)
Recently, differential privacy algorithms based on deep learning have become increasingly mature. Previous studies provide privacy mostly by adding differential privacy noise to the gradient, but this reduces accuracy, and it is difficult to balance privacy against accuracy. In this paper, the DP-ASSGD algorithm is proposed to balance privacy and accuracy. The convergence speed is improved, the number of iterations is reduced, and the privacy loss is significantly decreased. In addition, by exploiting the postprocessing immunity of the differential privacy model, a Laplace smoothing mechanism is added to make training more stable and generalization stronger. Experiments on the MNIST dataset show that, under the same privacy budget, accuracy improves by 1.8% on average over existing differential privacy algorithms; when achieving the same accuracy, DP-ASSGD consumes less privacy budget.
Keywords: differential privacy protection; learning rate adaptation; Laplace smoothing
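The noise-on-gradient approach the abstract contrasts itself with can be sketched as per-example clipping (to bound sensitivity) followed by Laplace noise scaled to sensitivity/epsilon. This is a generic pure epsilon-DP step, not the DP-ASSGD algorithm itself; all names and constants are illustrative.

```python
import numpy as np

def dp_noisy_gradient(grads, clip_norm=1.0, epsilon=1.0, rng=None):
    """Clip each per-example gradient to bound sensitivity, average them,
    then add Laplace noise with scale = sensitivity / epsilon."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in grads]
    mean = np.mean(clipped, axis=0)
    sensitivity = 2.0 * clip_norm / len(grads)   # replacing one example
    noise = rng.laplace(0.0, sensitivity / epsilon, size=mean.shape)
    return mean + noise
```

The clipping bound is the lever behind the privacy/accuracy tension the abstract mentions: a tighter `clip_norm` means less noise but more gradient distortion.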
Adam revisited: a weighted past gradients perspective (Cited by: 2)
Authors: Hui Zhong, Zaiyi Chen, Chuan Qin, Zai Huang, Vincent W. Zheng, Tong Xu, Enhong Chen. Frontiers of Computer Science (SCIE, EI, CSCD), 2020, Issue 5, pp. 61-76 (16 pages)
Adaptive learning rate methods have been successfully applied in many fields, especially in training deep neural networks. Recent results have shown that adaptive methods with exponentially increasing weights on squared past gradients (i.e., ADAM, RMSPROP) may fail to converge to the optimal solution. Though many algorithms, such as AMSGRAD and ADAMNC, have been proposed to fix the non-convergence issues, achieving a data-dependent regret bound similar to or better than ADAGRAD's remains a challenge for these methods. In this paper, we propose a novel adaptive method, the weighted adaptive algorithm (WADA), to tackle the non-convergence issues. Unlike AMSGRAD and ADAMNC, we use a milder weighting strategy on squared past gradients, in which the weights grow linearly. Based on this idea, we propose the weighted adaptive gradient method framework (WAGMF) and implement the WADA algorithm on this framework. Moreover, we prove that WADA achieves a weighted data-dependent regret bound, which can be better than ADAGRAD's original regret bound when the gradients decrease rapidly. This bound may partially explain the good practical performance of ADAM. Finally, extensive experiments demonstrate the effectiveness of WADA and its variants in comparison with several variants of ADAM on convex problems and deep neural networks.
Keywords: adaptive learning rate methods; stochastic gradient descent; online learning
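The "weights grow linearly" idea can be illustrated with a second-moment accumulator whose weight on the i-th squared gradient is i, in contrast to the exponential weighting implicit in ADAM/RMSPROP. This is a simplified illustration of the weighting scheme only, not the exact WADA/WAGMF update from the paper.

```python
import numpy as np

def linear_weighted_second_moment(grads):
    """Accumulate squared past gradients with linearly growing weights
    w_i = i and return the normalized accumulator, the quantity an
    adaptive method would divide the step size by (after a sqrt)."""
    t = len(grads)
    weights = np.arange(1, t + 1, dtype=float)   # 1, 2, ..., t
    sq = np.array([g ** 2 for g in grads], dtype=float)
    return (weights @ sq) / weights.sum()
```

Because the weights grow only linearly, old gradients are forgotten far more slowly than under exponential weighting, which is the "milder" behavior the regret analysis relies on.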
Speed up Training of the Recurrent Neural Network Based on Constrained Optimization Techniques
Authors: 陈珂, 包威权, 迟惠生. Journal of Computer Science & Technology (SCIE, EI, CSCD), 1996, Issue 6, pp. 581-588 (8 pages)
In this paper, a constrained optimization technique is explored for a substantial problem: accelerating the training of globally recurrent neural networks. Unlike most previous methods for feedforward neural networks, the authors adopt the constrained optimization technique to improve the gradient-based algorithm of the globally recurrent neural network, adapting the learning rate during training. Using the recurrent network with the improved algorithm, experiments on two real-world problems, filtering additive noise in acoustic data and classifying temporal signals for speaker identification, have been performed. The experimental results show that the recurrent neural network with the improved learning algorithm trains significantly faster and achieves satisfactory performance.
Keywords: recurrent neural network; adaptive learning rate; gradient-based algorithm
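The abstract does not detail the constrained-optimization rule, but a trust-region-style stand-in conveys the flavor: shrink the learning rate whenever the proposed update would violate a norm constraint on the step. The function and parameter names here are hypothetical, not the paper's.

```python
import numpy as np

def constrained_lr(grad, base_lr, max_step=0.1):
    """Return base_lr if the proposed step lr*||grad|| satisfies the norm
    constraint; otherwise scale the rate down so the step lands exactly
    on the constraint boundary."""
    grad_norm = np.linalg.norm(grad)
    if base_lr * grad_norm <= max_step:
        return base_lr
    return max_step / max(grad_norm, 1e-12)
```

Bounding the step size this way is particularly relevant for recurrent networks, where gradients through time can spike sharply.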
Real-time tool condition monitoring method based on in situ temperature measurement and artificial neural network in turning
Authors: Kaiwei CAO, Jinghui HAN, Long XU, Tielin SHI, Guanglan LIAO, Zhiyong LIU. Frontiers of Mechanical Engineering (SCIE, CSCD), 2022, Issue 1, pp. 84-98 (15 pages)
Tool failures in machining processes often cause severe damage to workpieces and large losses, making tool condition monitoring an important and urgent issue. However, problems such as practicability still remain in actual machining. Here, a real-time tool condition monitoring method integrated into an in situ fiber-optic temperature measuring apparatus is proposed. A thermal simulation is conducted to investigate how fluctuating cutting heat affects the measured temperatures, and an intermittent cutting experiment verifies that the apparatus can capture rapid but slight temperature undulations. A Fourier transform is applied; the selected spectrum features are input into an artificial neural network for classification, and a warning is issued if the tool is worn. A learning rate adaption algorithm is introduced, greatly reducing the dependence on initial parameters and making training convenient and flexible. The accuracy stays at 90% or higher in variable-parameter processes. Furthermore, an application program with a graphical user interface is constructed to present real-time results, confirming practicality.
Keywords: tool condition monitoring; cutting temperature; neural network; learning rate adaption
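The Fourier-transform-plus-spectrum-features step can be sketched as summarizing the magnitude spectrum of the temperature signal into per-band energies, the kind of compact feature vector a small classifier network can consume. The band count and function name are illustrative, not taken from the paper.

```python
import numpy as np

def spectrum_features(signal, n_bands=8):
    """FFT the (mean-removed) temperature signal and summarize the
    magnitude spectrum as per-band energies."""
    mag = np.abs(np.fft.rfft(signal - np.mean(signal)))
    bands = np.array_split(mag, n_bands)          # contiguous frequency bands
    return np.array([np.sum(b ** 2) for b in bands])
```

A narrowband temperature undulation shows up as energy concentrated in one band, which is exactly the kind of feature that separates worn from fresh tools in the spectrum.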