
Review and Research on Artificial Neural Network Acceleration Methods
Abstract: Targeting the compute-intensive and data-intensive characteristics of artificial neural networks (ANN), and building on an analysis of current common hardware acceleration architectures, this paper proposes a logical structure for a reconfigurable many-core acceleration array consisting of a rule control layer, a data cache layer, and a multiply-accumulate computing layer; a network-on-chip (NoC) is further constructed in the data cache layer to implement data flow between the processing nodes. This structure overcomes the von Neumann memory-wall problem and realizes near-data computing (NDC) that integrates computation and storage.
Authors: TAO Changyong, GAO Yanzhao, WANG Yuanlei, ZHANG Xingming (Tianjin Binhai Information Technology Innovation Center, Tianjin 300450, China; National Digital Switching System Engineering Technology Research Center, Zhengzhou 450000, Henan Province, China; People's Liberation Army Information Engineering University, Zhengzhou 450000, Henan Province, China)
Source: Tianjin Science & Technology (《天津科技》), 2019, Issue S01, pp. 28-30 (3 pages)
Keywords: artificial neural network (ANN); many-core architecture; near-data computing (NDC); network-on-chip (NoC)
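
The abstract gives only the logical organization of the acceleration array. As an illustration of how the three layers might interact, the Python sketch below models a toy version: a rule-control layer schedules the computation, each processing node pairs a local data cache with a multiply-accumulate (MAC) unit, and a simple mesh network-on-chip moves cached operands between nodes so the MAC operation happens next to the data. All names (ProcessingNode, MeshNoC, RuleController) are hypothetical and assumed for illustration; this is a behavioral sketch under those assumptions, not the authors' implementation.

```python
# Hypothetical behavioral model of the three-layer many-core array described
# in the abstract: rule control layer -> RuleController, data cache layer
# (with NoC) -> per-node cache + MeshNoC, multiply-accumulate layer -> mac().
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Coord = Tuple[int, int]  # (row, col) position of a node in the array


@dataclass
class ProcessingNode:
    """One cell of the acceleration array: local cache plus a MAC granule."""
    coord: Coord
    cache: Dict[str, List[float]] = field(default_factory=dict)  # data cache layer
    acc: float = 0.0                                              # MAC accumulator

    def load(self, name: str, values: List[float]) -> None:
        """Stage operands in the node-local cache (near-data storage)."""
        self.cache[name] = list(values)

    def mac(self, a_name: str, b_name: str) -> float:
        """Multiply-accumulate over two locally cached vectors."""
        for a, b in zip(self.cache[a_name], self.cache[b_name]):
            self.acc += a * b
        return self.acc


class MeshNoC:
    """Minimal mesh network-on-chip: moves cached data between nodes."""

    def __init__(self, rows: int, cols: int):
        self.nodes = {(r, c): ProcessingNode((r, c))
                      for r in range(rows) for c in range(cols)}

    def route(self, src: Coord, dst: Coord, name: str) -> None:
        """Copy a cached block from src to dst (single-hop abstraction)."""
        self.nodes[dst].load(name, self.nodes[src].cache[name])


class RuleController:
    """Rule-control layer: configures the nodes and sequences the computation."""

    def __init__(self, noc: MeshNoC):
        self.noc = noc

    def run_dot_product(self, weights: List[float], activations: List[float]) -> float:
        """Toy schedule: cache operands on two nodes, stream the weights over
        the NoC to the node holding the activations, and accumulate there."""
        self.noc.nodes[(0, 0)].load("w", weights)
        self.noc.nodes[(0, 1)].load("x", activations)
        self.noc.route((0, 0), (0, 1), "w")          # data flows over the NoC
        return self.noc.nodes[(0, 1)].mac("w", "x")  # MAC next to the data


if __name__ == "__main__":
    controller = RuleController(MeshNoC(rows=2, cols=2))
    print(controller.run_dot_product([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```

In this sketch the "reconfigurability" amounts to the controller deciding at run time which nodes hold which operands and where the MAC executes; in hardware the same role would be played by configuration rules distributed to the array rather than Python method calls.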