
Theories and methods for large-scale brain-inspired neural networks (大规模类脑神经网络理论与方法)

Cited by: 1
Abstract: Brain-inspired spiking neural networks (SNNs), known as the third generation of artificial neural networks, capture temporal information and compute efficiently by simulating neural dynamics and event-driven computation, providing a new paradigm for the development of artificial intelligence. The brain's remarkable information-processing capability is largely attributable to its enormous network scale and complex connectivity. Building large-scale brain-inspired neural networks has produced breakthrough progress in brain-inspired artificial intelligence, neuromorphic computing, and many application domains. Drawing on existing research, this paper first surveys the computational principles and latest advances in four areas: spiking neuron models, large-scale SNN models and algorithms, deep training frameworks, and neuromorphic chips, and identifies both the progress and the open problems in current large-scale brain-inspired network research. It then focuses on neuromorphic vision applications of large-scale brain-inspired networks, including neuromorphic visual reconstruction and object detection in extreme scenes. Finally, building on a summary of existing results, it draws several conclusions about the state of the field, points out remaining problems, and offers an outlook on the needs, expectations, and development trends of future research.

The brain's structure and functions have consistently inspired the development of intelligent technologies throughout human history. Brain-inspired spiking neural networks (SNNs) emulate neural dynamics by incorporating event-based computations to capture temporal information, thereby enabling brain-like and energy-efficient computation. SNNs have been successfully applied in various fields, such as image recognition and object detection, and have the potential to revolutionize the field of artificial intelligence. Similar to the brain, artificial models show emergent properties, such as high performance and intelligence, at sufficiently large size. However, the sizes of SNNs have been limited, which hinders potential performance improvements and thus their real-world application. Therefore, methods and theories for constructing large-scale SNNs must be developed to achieve their highest potential, and new techniques for building large-scale SNNs have recently attracted extensive research interest.

In this review, notable advances and developments in large-scale SNNs are summarized, covering spiking neuron, algorithm, and model structure designs; software frameworks and neuromorphic computing chips; and neuromorphic applications. Various strategies, including parallel computing, surrogate gradients, and transformer-based SNNs, are discussed in detail. Specifically, various types of spiking neurons have been developed, with parametric neurons improving the representation ability of SNNs and parallel spiking neurons accelerating the training process. The direct training method is notable for enabling the training of large-scale SNNs, while brain-inspired methods such as plasticity, attention, and pruning algorithms improve the robustness and energy efficiency of SNNs. The ResNet structure and its variants have led to the development of large-scale SNNs, and the emergence of transformer-based SNNs has significantly improved SNN performance on computer vision tasks. Programming frameworks have evolved from solely supporting the inference of large-scale networks to facilitating training, including automatic differentiation, parallel computation acceleration, and implementation on neuromorphic sensors and chips. Large SNNs are particularly well suited to processing events/spikes collected by neuromorphic sensors, especially for image reconstruction and object detection, but their development is still in the early stages.

Furthermore, existing challenges and deficiencies are discussed, providing guidance for the future development of large-scale SNN models. In summary, brain-inspired designs of models and neuromorphic chips are still limited to simplified simulations and lack sufficient biological plausibility. The temporal dynamic representation capabilities of SNNs have not yet been fully demonstrated. The computations of large-scale SNNs must be accelerated while reducing their power consumption. Moreover, both computer vision tasks and tasks such as natural language processing, reasoning, and decision-making must be investigated. The problem of efficiently measuring and representing asynchronous spatiotemporal event/spike data remains unresolved, limiting the full potential of existing neuromorphic vision systems. In conclusion, large-scale brain-inspired neural networks can still be substantially improved, and efforts are needed in various research directions, including algorithms, computing frameworks, and architectures, to explore and optimize brain-like models and hardware in a collaborative manner.
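The abstract refers to neural dynamics, event-driven computation, and surrogate-gradient direct training without giving details. The sketch below is a minimal PyTorch illustration of those ideas, not code from the paper: a leaky integrate-and-fire (LIF) neuron unrolled over discrete time steps, with a Heaviside spike in the forward pass and a rectangular surrogate derivative in the backward pass. All names (SurrogateSpike, LIFNeuron) and constants (tau, threshold, surrogate window) are illustrative assumptions.

```python
# Minimal LIF neuron with a surrogate gradient (PyTorch). Illustrative only;
# names and constants are assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside step in the forward pass; rectangular surrogate in backward."""

    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass gradients only near the threshold; the true spike derivative
        # is zero almost everywhere and would block learning entirely.
        return grad_output * (x.abs() < 0.5).float()


class LIFNeuron(nn.Module):
    """Leaky integrate-and-fire neuron unrolled over discrete time steps."""

    def __init__(self, tau: float = 2.0, v_threshold: float = 1.0):
        super().__init__()
        self.tau, self.v_threshold = tau, v_threshold

    def forward(self, x):  # x: (time, batch, features)
        v = torch.zeros_like(x[0])
        spikes = []
        for t in range(x.shape[0]):
            v = v + (x[t] - v) / self.tau                 # leaky integration
            s = SurrogateSpike.apply(v - self.v_threshold)
            v = v * (1.0 - s)                             # hard reset on spike
            spikes.append(s)
        return torch.stack(spikes)


if __name__ == "__main__":
    lif = LIFNeuron()
    inp = torch.rand(8, 4, 16, requires_grad=True)        # 8 time steps
    out = lif(inp)
    out.sum().backward()                                  # flows via surrogate
    print(out.shape, inp.grad.abs().sum().item())
```

The surrogate window is what lets gradients reach earlier layers and time steps, which is the basis of the direct training method the abstract highlights.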
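The abstract also credits parallel spiking neurons with accelerating training. A rough sketch of the idea follows, under the assumption of a learnable T x T causal time-mixing matrix and no reset coupling between steps, in the spirit of published parallel-spiking-neuron designs; the class name and details are hypothetical.

```python
# Sketch of a parallel spiking neuron: membrane potentials for all T steps in
# one matmul, no reset coupling between steps, so training parallelizes over
# time instead of looping step by step. Hypothetical illustration.
import torch
import torch.nn as nn


class ParallelSpikingNeuron(nn.Module):
    def __init__(self, T: int, v_threshold: float = 1.0):
        super().__init__()
        # Causal mask: step t may only mix inputs from steps <= t.
        self.register_buffer("mask", torch.tril(torch.ones(T, T)))
        self.weight = nn.Parameter(torch.eye(T))   # learnable time mixing
        self.v_threshold = v_threshold

    def forward(self, x):  # x: (T, batch, features)
        w = self.weight * self.mask
        v = torch.einsum("ts,s...->t...", w, x)    # all time steps at once
        spike = (v >= self.v_threshold).float()
        # Straight-through estimator: forward value is the binary spike,
        # backward gradient passes through v unchanged.
        return spike + (v - v.detach())


if __name__ == "__main__":
    psn = ParallelSpikingNeuron(T=8)
    x = torch.rand(8, 4, 16, requires_grad=True)
    psn(x).sum().backward()
    print(x.grad.shape)  # torch.Size([8, 4, 16])
```

Dropping the per-step reset is the design choice that removes the sequential dependency; it trades some biological fidelity for wall-clock training speed.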
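Finally, the abstract notes that efficiently representing asynchronous spatiotemporal event/spike data remains unresolved. A common but lossy baseline is to bin events into fixed frames; the helper below is a hypothetical illustration of that baseline, assuming events are given as (timestamp, x, y, polarity) rows.

```python
# Hypothetical baseline: bin asynchronous sensor events into dense frames.
import torch


def events_to_frames(events: torch.Tensor, T: int, H: int, W: int) -> torch.Tensor:
    """Bin events into T frames with one channel per polarity.

    events: (N, 4) rows of (timestamp, x, y, polarity), polarity in {0, 1}.
    Returns (T, 2, H, W) event-count frames. Binning discards sub-bin timing,
    which is exactly the information loss the abstract points to.
    """
    frames = torch.zeros(T, 2, H, W)
    t = events[:, 0]
    # Normalize timestamps into [0, T) and clamp the last event into bin T-1.
    bins = ((t - t.min()) / (t.max() - t.min() + 1e-9) * T).long().clamp(max=T - 1)
    x, y, p = events[:, 1].long(), events[:, 2].long(), events[:, 3].long()
    frames.index_put_((bins, p, y, x), torch.ones(len(events)), accumulate=True)
    return frames


if __name__ == "__main__":
    ev = torch.rand(1000, 4) * torch.tensor([1.0, 32, 32, 2])  # toy events
    print(events_to_frames(ev, T=8, H=32, W=32).shape)  # (8, 2, 32, 32)
```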
Authors: Zhengyu Ma (马征宇) and Yonghong Tian (田永鸿), Peng Cheng Laboratory, Shenzhen 518000, China; School of Computer Science, Peking University, Beijing 100871, China; School of Electronic and Computer Engineering, Peking University, Shenzhen 518055, China
Source: Chinese Science Bulletin (科学通报), 2023, Issue 35: 4764-4781 (18 pages). Indexed by EI, CAS, CSCD; Peking University Core Journal.
Funding: Supported by the National Science Fund for Distinguished Young Scholars (61825101) and the National Natural Science Foundation of China (62027804, 62088102, 62206141, 62236009).
Keywords: large-scale spiking neural networks; spiking neuron model; neuromorphic chips; deep learning framework for spiking neural networks; neuromorphic sensors

