Abstract
The high computational cost of training neural networks is a major difficulty to be overcome. By treating the successive layers of a multilayer feedforward neural network as the stages of a pipeline, the authors develop a parallel BP algorithm for MIMD machines that improves the efficiency of the error backpropagation algorithm. The paper closes with an analysis of this parallel implementation of the BP algorithm; the theoretical results show that many network architectures can be parallelized efficiently.
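Since only the abstract is available here, the following Python sketch merely illustrates the decomposition it describes: each layer of a feedforward network becomes one stage that owns its weights, receives activations from the upstream stage, and returns error signals to it. The Stage class, the XOR training task, and all parameter values are assumptions of this sketch, not the authors' implementation; on a MIMD machine each stage would run on its own processor with different training samples occupying different stages at the same time, whereas here the whole pipeline is simulated sequentially in one process.

```python
# Minimal sketch (not the paper's implementation): a feedforward network
# split into per-layer stages, the unit that a pipelined MIMD version of
# BP would map onto separate processors. Here everything runs in one
# process, one sample at a time.
import numpy as np

rng = np.random.default_rng(0)

class Stage:
    """One layer of the MLP, viewed as one pipeline stage."""
    def __init__(self, n_in, n_out, lr=0.5):
        self.W = rng.normal(scale=0.5, size=(n_in, n_out))
        self.b = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        # Receive activations from the upstream stage, cache them,
        # and pass the layer output downstream.
        self.x = x
        self.a = 1.0 / (1.0 + np.exp(-(x @ self.W + self.b)))  # sigmoid
        return self.a

    def backward(self, delta_out):
        # Receive the error signal from the downstream stage, update the
        # local weights, and send the error signal upstream.
        delta = delta_out * self.a * (1.0 - self.a)   # sigmoid derivative
        grad_W = self.x.T @ delta
        delta_in = delta @ self.W.T                   # error for the previous stage
        self.W -= self.lr * grad_W
        self.b -= self.lr * delta.sum(axis=0)
        return delta_in

# Hypothetical 2-4-1 network learning XOR, split into two stages.
stages = [Stage(2, 4), Stage(4, 1)]
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

for epoch in range(10000):
    for i in range(len(X)):
        act = X[i:i+1]
        # Forward sweep: activations flow stage by stage down the pipeline.
        for s in stages:
            act = s.forward(act)
        # Backward sweep: the error retraces the pipeline in reverse.
        delta = act - Y[i:i+1]                        # squared-error gradient
        for s in reversed(stages):
            delta = s.backward(delta)

# Outputs should move toward [0, 1, 1, 0] after training.
preds = [stages[1].forward(stages[0].forward(x[None, :]))[0, 0] for x in X]
print(np.round(preds, 2))
```

The two loops make the data flow explicit: activations travel forward through the chain of stages and error signals travel backward through the same chain, which is exactly the flow that a pipelined MIMD implementation would overlap across successive training samples.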
Source
Journal of Northern Jiaotong University (《北方交通大学学报》)
Indexed in CSCD; Peking University Core Journal (北大核心)
1995, No. 4, pp. 544-548 (5 pages)
Keywords
artificial neural network
BP algorithm
MIMD machine
pipelining