Abstract
Dynamic neural network (NN) techniques are increasingly important because they facilitate deep learning techniques with more complex network architectures. However, existing studies, which predominantly optimize static computational graphs with static scheduling methods, usually focus on optimizing static neural networks in deep neural network (DNN) accelerators. We analyze the execution process of dynamic neural networks and observe that their dynamic features introduce challenges for efficient scheduling and pipelining in existing DNN accelerators. We propose DyPipe, a holistic approach to optimizing dynamic neural network inference in enhanced DNN accelerators. DyPipe achieves significant performance improvements for dynamic neural networks while introducing negligible overhead for static neural networks. Our evaluation demonstrates that DyPipe achieves a 1.7x speedup on dynamic neural networks and maintains more than 96% of the performance on static neural networks.
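To illustrate the "dynamic features" the abstract refers to, here is a minimal, hypothetical sketch (not from the paper): a toy early-exit network whose layer count depends on the input values, so its computational graph, and hence its schedule, cannot be fixed ahead of time the way a static NN's can. The `relu`, `layer`, and `dynamic_forward` names are illustrative only.

```python
def relu(x):
    return [max(0.0, v) for v in x]

def layer(x, weight):
    # toy layer: scale every activation, then apply ReLU
    return relu([v * weight for v in x])

def dynamic_forward(x, weights, threshold=4.0):
    """Early-exit inference: stop as soon as the peak activation
    crosses `threshold`, so different inputs execute different
    numbers of layers (input-dependent control flow)."""
    executed = 0
    for w in weights:
        x = layer(x, w)
        executed += 1
        if max(x) > threshold:  # branch decided at run time, per input
            break
    return x, executed

weights = [2.0, 2.0, 2.0, 2.0]
_, deep = dynamic_forward([0.1, 0.2], weights)     # small input: runs all 4 layers
_, shallow = dynamic_forward([3.0, 1.0], weights)  # large input: exits after 1 layer
print(deep, shallow)  # → 4 1
```

Because `executed` varies per input, a static scheduler cannot pre-allocate pipeline stages for a fixed graph, which is the scheduling challenge DyPipe targets.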
Authors
Yi-Min Zhuang, Xing Hu, Xiao-Bing Chen, Tian Zhi (State Key Laboratory of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China)
Funding
This work was supported by the Beijing Natural Science Foundation under Grant No. JQ18013, the National Natural Science Foundation of China under Grant Nos. 61925208, 61732007, 61732002 and 61906179, the Strategic Priority Research Program of Chinese Academy of Sciences (CAS) under Grant No. XDB32050200, the Youth Innovation Promotion Association CAS, the Beijing Academy of Artificial Intelligence (BAAI), and the Xplore Prize.