Abstract
The Spiking Neural Network (SNN), as the third-generation neural network, has received widespread attention, but because of its discrete spike sequences and neuron mechanism it lacks an effective training algorithm. This paper designs an approximate (quasi-spiking) activation function suitable for gradient descent, converts the spiking neuron model into an iterative version, and designs new encoding and decoding layers for data processing; on this basis, a training algorithm applicable to large-scale SNNs is proposed, and an open-source training framework based on PyTorch has been developed. Compared with other training algorithms, this algorithm achieves higher accuracy, reaching 98.81% and 70.78% on the MNIST and CIFAR10 datasets, respectively; compared with an ANN, power consumption is reduced by 22.18% and 25.61%, respectively, and the length of the spike sequence is reduced by up to a factor of 750.
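The core idea the abstract describes, an iterative spiking-neuron update paired with an approximate activation function so that gradient descent can propagate through the non-differentiable spike, can be sketched in plain Python. The LIF-style dynamics, the threshold, decay factor, and rectangular surrogate window below are illustrative assumptions for exposition, not the paper's actual design:

```python
# Sketch of an iterative LIF-style neuron with a rectangular surrogate
# gradient. All parameter values here are assumptions, not the paper's.

V_TH = 1.0    # firing threshold (assumed)
TAU = 0.9     # membrane decay factor (assumed)
WIDTH = 0.5   # half-width of the rectangular surrogate window (assumed)

def lif_step(v, x):
    """One iterative update: decay the membrane, add input, fire, reset."""
    v = TAU * v + x
    spike = 1.0 if v >= V_TH else 0.0   # non-differentiable Heaviside step
    v = v * (1.0 - spike)               # hard reset after a spike
    return v, spike

def surrogate_grad(v):
    """Rectangular approximation of dSpike/dV used during backpropagation.

    The forward pass keeps the true Heaviside spike; the backward pass
    substitutes this smooth-enough window so gradients can flow.
    """
    return 1.0 / (2 * WIDTH) if abs(v - V_TH) < WIDTH else 0.0

# Drive the neuron with a constant input over a few time steps.
v, spikes = 0.0, []
for t in range(5):
    v, s = lif_step(v, 0.6)
    spikes.append(s)
# Sub-threshold steps accumulate charge; the neuron fires every other step.
```

In frameworks like PyTorch this pattern is typically realized as a custom autograd function whose forward applies the hard threshold and whose backward returns the surrogate derivative.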
Author
徐梦遥
Xu Mengyao (Shanghai Jiaotong University, Shanghai 200240)
Source
《现代计算机》
2021, Issue 35, pp. 1-11 (11 pages)
Modern Computer
Keywords
spiking neural network
spiking neuron model
backpropagation
gradient descent training