Abstract
Among the many technologies for realizing artificial intelligence, neural networks are the most successful and most widely applied today, and with the advent of deep learning the field continues to flourish. However, as neural network designs grow increasingly complex, rising power consumption has become a major factor limiting their development; convolution operations alone can account for up to 90% of the total. Since the computations in a neural network do not need to be fully precise, this thesis designs an approximate multiplier circuit suited to neural networks, with the goal of reducing power consumption.
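The record does not describe the paper's specific circuit, but the general idea it relies on, trading exact products for cheaper approximate ones, can be illustrated with Mitchell's classic logarithmic multiplication, a well-known approximate-multiplier technique (shown here only as an example of the approach, not as the design in this thesis): each operand x = 2^k (1 + f) is approximated as log2(x) ≈ k + f, the logs are added, and the antilog is taken the same way. This replaces a full multiply with shifts and an add, at the cost of a bounded relative error of about 11%.

```python
def mitchell_approx_log2(x: int) -> float:
    """Approximate log2(x) for a positive integer x.

    With x = 2**k * (1 + f), 0 <= f < 1, Mitchell's method uses
    log2(x) ~= k + f (the fraction stands in for log2(1 + f)).
    """
    k = x.bit_length() - 1          # position of the leading one bit
    f = x / (1 << k) - 1.0          # fractional part in [0, 1)
    return k + f


def mitchell_mul(a: int, b: int) -> float:
    """Approximate a * b by adding approximate logs and taking the antilog."""
    s = mitchell_approx_log2(a) + mitchell_approx_log2(b)
    k = int(s)                      # integer part -> shift amount
    f = s - k                       # fractional part -> mantissa
    return (1 + f) * (1 << k)       # antilog: (1 + f) * 2**k
```

The result is exact when both operands are powers of two (f = 0) and otherwise underestimates the true product by at most about 11%, a level of error that error-tolerant workloads such as neural-network inference can often absorb.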
Author
Qi Chen (Southeast University, Jiangsu, 21009)
Source
《电子技术(上海)》 (Electronic Technology)
2018, No. 3, pp. 45-49 and 44 (6 pages in total)
Keywords
Low power
Neural network
Approximate multiplier