Abstract
This paper analyzes the performance of parallel implementations of two kinds of non-modular artificial neural networks, fully connected and randomly connected networks, on a multiprocessor architecture. It concludes that the topology of the multiprocessor network and the fan-in of its processing nodes have little effect on the performance of these parallel implementations. The time of one learning iteration is decomposed, and the maximum speedup and the optimal size of the multiprocessor system are calculated.
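For orientation only, the kind of per-iteration time decomposition and speedup analysis the abstract refers to can be sketched as follows; the symbols (T_comp, T_comm, the constants c_1, c_2, c_3, and the N^2/p work term for a fully connected network of N neurons on p processors) are illustrative assumptions, not the notation or model used in the paper.

% Illustrative sketch, not the paper's model: one learning iteration split into
% local computation (weighted sums and weight updates, roughly N^2/p work for a
% fully connected network) plus inter-processor exchange of activation values.
\begin{align}
  T_{\text{iter}}(p) &= T_{\text{comp}}(p) + T_{\text{comm}}(p)
                      \approx \frac{c_1 N^2}{p} + c_2\,p + c_3, \\
  S(p) &= \frac{T_{\text{iter}}(1)}{T_{\text{iter}}(p)},
  \qquad
  p^{*} = \arg\min_{p} T_{\text{iter}}(p) = N\sqrt{\frac{c_1}{c_2}}.
\end{align}

Under a model of this form, the maximum speedup is reached at the processor count p* that minimizes the per-iteration time, which is the sense in which an optimal size of the multiprocessor system can be calculated.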
Source
《小型微型计算机系统》 (Journal of Chinese Computer Systems), 2000, No. 5, pp. 547-548 (2 pages)
Indexed in EI, CSCD, and the Peking University Core Journals list (北大核心)
Funding
Supported by the Jinan University "211 Project" program "Broadband Multimedia Communication and Network Information System"
Keywords
Non-modular
Artificial neural network
Parallel implementation
Multiprocessor
Neural network
Fully connected
Randomly connected
Parallel implementation
Performance analysis