Abstract
Resistive Random-Access Memory (ReRAM) based Processing-in-Memory (PIM) frameworks have been proposed to accelerate the execution of DNN models by eliminating data movement between the computing and memory units. To further reduce area and energy consumption, DNN weight sparsity and weight pattern repetition are exploited to optimize these ReRAM-based accelerators. However, most existing works focus on only one aspect of this software/hardware co-design framework and optimize each aspect individually, which leaves the design far from optimal. In this paper, we propose PRAP-PIM, which jointly exploits weight sparsity and weight pattern repetition through a weight-pattern-reusing-aware pruning method. By relaxing the precondition for weight pattern reusing, we propose a similarity-based weight pattern reusing method that achieves a higher weight pattern reusing ratio. Experimental results show that PRAP-PIM achieves a 1.64× performance improvement and a 1.51× energy efficiency improvement on popular deep learning benchmarks, compared with state-of-the-art ReRAM-based DNN accelerators.
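The similarity-based reusing idea described above can be illustrated with a minimal sketch: instead of requiring two weight patterns to be identical before one can be reused, patterns whose similarity exceeds a threshold share a single stored representative. The function name, the greedy clustering strategy, and the use of cosine similarity here are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def group_similar_patterns(patterns, threshold=0.95):
    """Greedily cluster weight patterns (hypothetical helper):
    a pattern reuses the first stored representative whose cosine
    similarity with it is at least `threshold`; otherwise it is
    stored as a new representative."""
    representatives = []  # distinct patterns actually stored
    assignment = []       # representative index reused by each input pattern
    for p in patterns:
        p = np.asarray(p, dtype=float)
        matched = False
        for i, r in enumerate(representatives):
            denom = np.linalg.norm(p) * np.linalg.norm(r)
            sim = float(p @ r) / denom if denom else 0.0
            if sim >= threshold:
                assignment.append(i)
                matched = True
                break
        if not matched:
            representatives.append(p)
            assignment.append(len(representatives) - 1)
    return representatives, assignment
```

Relaxing exact equality to a similarity threshold in this way increases the reuse ratio: near-duplicate patterns collapse onto one stored copy, at the cost of a small approximation error controlled by the threshold.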
Funding
This work was partially supported by the National Natural Science Foundation of China (92064008),
the CCF-Huawei Huyanglin Project (CCF-HuaweiST2021002),
the Open Project Program of Wuhan National Laboratory for Optoelectronics (2022WNLOKF018),
and the Shandong Provincial Natural Science Foundation (ZR2022LZH010).