Abstract: Rate of penetration (ROP) is a key factor in optimizing drilling operations and reducing costs, and effective ROP prediction while drilling is essential for improving drilling efficiency. Because of the complex, changeable downhole conditions and the heterogeneity of formations, ROP prediction based on traditional ROP equations and regression analysis is limited in accuracy. To achieve high-accuracy ROP prediction, the existing BP (back propagation) neural network is improved and a new model is proposed: a particle swarm optimized BP neural network with a dynamic adaptive learning rate, which uses mud-logging data to build a prediction model for the target well. During training, the BP network is improved with two heuristic techniques, an additional momentum term and an adaptive learning rate; combining the two yields a dynamic adaptive learning rate BP algorithm that raises training speed and fitting accuracy and gives better generalization. The BP network is further combined with a genetic algorithm (GA) and particle swarm optimization (PSO) to obtain the optimized dynamic adaptive learning rate BP network. Experiments on mud-logging data from well XX8-1-2 compare the predictions of three improved networks: the BP, PSO-BP, and GA-BP neural networks. The results show that the optimized PSO-BP network predicts best, with higher efficiency and reliability; it makes effective use of engineering data and provides reasonably accurate ROP predictions in areas with sufficient data acquisition.
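The combined momentum-plus-adaptive-learning-rate update described above can be sketched as a single weight step. This is a minimal sketch under common textbook conventions; the function name and the constants (`alpha`, `inc`, `dec`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def adaptive_momentum_step(w, grad, velocity, eta, prev_loss, loss,
                           alpha=0.9, inc=1.05, dec=0.7):
    """One BP weight update with a momentum term and a loss-driven learning rate.

    If the loss decreased since the last step, grow the learning rate slightly;
    if it increased, shrink it. The momentum term smooths the descent direction.
    All constants here are illustrative, not the paper's.
    """
    if loss < prev_loss:
        eta *= inc          # progress was made: take slightly larger steps
    else:
        eta *= dec          # loss rose: back off the step size
    velocity = alpha * velocity - eta * grad
    return w + velocity, velocity, eta
```

On a toy quadratic loss this update drives the weight toward the minimum while the learning rate self-adjusts; the same step function would sit inside a full BP training loop over the mud-logging features.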
Abstract: This paper presents an improved BP algorithm. The approach reduces the amount of computation by using a logarithmic objective function. The learning rate μ(k) at each iteration is determined by a dynamic optimization method to accelerate convergence. Since the learning rate in the proposed BP algorithm is determined using only the first-order derivatives already computed in the standard BP (SBP) algorithm, its computational and storage burden is comparable to that of SBP, while the convergence rate is remarkably accelerated. Computer simulations demonstrate the effectiveness of the proposed algorithm.
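One concrete way to pick a per-iteration learning rate μ(k) using only first-order information is a backtracking line search along the negative gradient. This is a stand-in sketch, not the paper's dynamic optimization rule: the Armijo condition, the constants, and the cross-entropy-style logarithmic cost shown here are assumptions for illustration.

```python
import numpy as np

def log_objective(y, t, eps=1e-12):
    """A logarithmic (cross-entropy-style) cost for targets t in (0, 1).

    Illustrative form only; the paper's logarithmic objective may differ.
    """
    return -np.mean(t * np.log(y + eps) + (1 - t) * np.log(1 - y + eps))

def backtracking_mu(loss_fn, w, grad, mu0=1.0, beta=0.5, c=1e-4, max_halvings=30):
    """Shrink mu until the Armijo sufficient-decrease condition holds.

    Uses only the already-computed gradient, so the extra cost per iteration
    is a handful of loss evaluations, in the spirit of reusing SBP's
    first-order derivatives.
    """
    f0 = loss_fn(w)
    g2 = float(np.dot(grad, grad))
    mu = mu0
    for _ in range(max_halvings):
        if loss_fn(w - mu * grad) <= f0 - c * mu * g2:
            return mu
        mu *= beta
    return mu
```

On a simple quadratic loss the search halves μ once (from 1.0 to 0.5) and lands exactly at the minimizer, illustrating how a per-iteration step-size choice can speed descent without second-order information.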
Funding: Supported by the National Natural Science Foundation of China (60904018, 61203040), the Natural Science Foundation of Fujian Province of China (2009J05147, 2011J01352), the Foundation for Distinguished Young Scholars of Higher Education of Fujian Province of China (JA10004), and the Science Research Foundation of Huaqiao University (09BS617).
Abstract: To accelerate supervised learning by the SpikeProp algorithm under the temporal coding paradigm in spiking neural networks (SNNs), three learning rate adaptation methods (heuristic rule, delta-delta rule, and delta-bar-delta rule), originally used to speed up training in artificial neural networks, are applied to develop training algorithms for feedforward SNNs. The performance of these algorithms is investigated in four experiments: the classical XOR (exclusive or) problem, the Iris dataset, fault diagnosis in the Tennessee Eastman process, and Poisson trains of discrete spikes. The results demonstrate that all three learning rate adaptation methods speed up SNN convergence compared with the original SpikeProp algorithm. Furthermore, when the adaptive learning rate is used in combination with a momentum term, the two modifications balance each other in a beneficial way, yielding rapid and steady convergence. Among the three learning rate adaptation methods, the delta-bar-delta rule performs best. The delta-bar-delta method with momentum has the fastest convergence rate, the most stable training process, and the highest learning accuracy. The proposed algorithms are simple and efficient, and consequently valuable for practical applications of SNNs.
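The delta-bar-delta rule with momentum, which the experiments above find best, keeps a separate learning rate per weight: it is increased additively while the gradient sign stays consistent and cut multiplicatively when the sign flips. The sketch below uses standard textbook constants (kappa, phi, theta, alpha), not the paper's, and applies the rule to generic weights rather than SpikeProp's spike-timing gradients.

```python
import numpy as np

class DeltaBarDelta:
    """Per-weight delta-bar-delta learning rates combined with momentum.

    Constants are typical illustrative values, not taken from the paper.
    """

    def __init__(self, n, eta0=0.1, kappa=0.01, phi=0.5, theta=0.7, alpha=0.9):
        self.eta = np.full(n, eta0)     # one learning rate per weight
        self.delta_bar = np.zeros(n)    # exponential average of past gradients
        self.velocity = np.zeros(n)     # momentum buffer
        self.kappa, self.phi, self.theta, self.alpha = kappa, phi, theta, alpha

    def step(self, w, grad):
        agree = self.delta_bar * grad > 0
        disagree = self.delta_bar * grad < 0
        self.eta[agree] += self.kappa   # consistent sign: grow additively
        self.eta[disagree] *= self.phi  # sign flip (oscillation): shrink fast
        self.delta_bar = (1 - self.theta) * grad + self.theta * self.delta_bar
        self.velocity = self.alpha * self.velocity - self.eta * grad
        return w + self.velocity
```

The multiplicative decrease reacts faster than the additive increase, which is what damps oscillation while still letting well-behaved weights take progressively larger steps; the momentum buffer then smooths the resulting per-weight updates.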