Abstract
To address the problem that most computing-in-memory (CIM) designs cannot handle non-convolution computations independently, a general-purpose hybrid CIM is proposed that combines transposed 8T cells with vector-based bit-serial in-memory operations. By using sign-magnitude bit-serial multiplication, two's complement addition, and overflow-aware activation handling, the design supports multiply-and-accumulate (MAC) operations on integer/fractional and positive/negative operands of arbitrary bit width, and can also perform pooling and activation operations on its own. This provides the flexibility and programmability needed by software algorithms ranging from neural networks to signal processing, while reducing data transfers over the bus. At 1.2 V and 500 MHz, the proposed CIM delivers a throughput of 71.3 GOPS and an energy efficiency of 20.63 TOPS/W for 8-bit operations; it supports convolution with flexible bit widths, reduces data movement, and improves energy efficiency and overall performance.
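The bit-serial MAC flow summarized in the abstract (sign-magnitude multiplication, two's complement accumulation, and overflow handling before activation) can be illustrated with a minimal Python sketch. The function name, the 8-bit operand width, the 16-bit saturating accumulator, and the ReLU-style activation below are illustrative assumptions, not details taken from the paper or its circuit.

# Minimal sketch of the bit-serial MAC flow described in the abstract.
# Assumptions (not from the paper): 8-bit sign-magnitude operands, a 16-bit
# two's complement accumulator with saturation, and a ReLU-style activation.
def bit_serial_mac(weights, activations, width=8, acc_bits=16):
    acc = 0
    acc_max = (1 << (acc_bits - 1)) - 1
    acc_min = -(1 << (acc_bits - 1))
    for w, a in zip(weights, activations):
        sign = -1 if (w < 0) != (a < 0) else 1      # sign bit: XOR of operand signs
        w_mag, a_mag = abs(w), abs(a)
        product = 0
        for i in range(width):                       # one weight bit per cycle
            if (w_mag >> i) & 1:
                product += a_mag << i                # shift-and-add partial product
        acc += sign * product                        # two's complement accumulation
        acc = max(acc_min, min(acc_max, acc))        # overflow handling: saturate
    return max(acc, 0)                               # ReLU-style activation

# Example: a 4-element dot product with mixed-sign 8-bit operands.
print(bit_serial_mac([3, -5, 7, 2], [10, -6, -4, 1]))  # prints 34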
Authors
XU Weidong, LOU Mian, LI Li, ZHANG Kai, GONG Longqing (Xi'an Microelectronics Technology Institute, Xi'an, Shaanxi 710054, China)
Source
Transactions of Beijing Institute of Technology (北京理工大学学报)
Indexed in: EI, CAS, CSCD, Peking University Core Journals (北大核心)
2024, No. 10, pp. 1095-1104 (10 pages)
Funding
Technology Exploration Project of the Military Science and Technology Commission (222-CXCY-M05-01-01-01)
Aerospace Young Top Talent Fund (YY2022-015)
Keywords
computing in-memory
deep neural networks
static random access memory
energy efficiency