Optimizing deep learning inference on mobile devices with neural network accelerators
Authors: Zeng Xi, Xu Yunlong, Zhi Tian. High Technology Letters (EI, CAS), 2019, No. 4, pp. 417-425 (9 pages)
Deep learning is now widely used in intelligent apps on mobile devices. In pursuit of ultra-low power and latency, integrating neural network accelerators (NNAs) into mobile phones has become a trend. However, conventional deep learning programming frameworks do not support such devices well, leading to low computing efficiency and high memory occupation. To address this problem, a 2-stage pipeline is proposed for optimizing deep learning model inference on mobile devices with NNAs in terms of both speed and memory footprint. The 1st stage reduces the computation workload via graph optimization, including splitting and merging nodes. The 2nd stage goes further by optimizing at the compilation level, including kernel fusion and in-advance compilation. The proposed optimizations are evaluated on a commercial mobile phone with an NNA. The experimental results show that the proposed approaches achieve a 2.8× to 26× speedup and reduce the memory footprint by up to 75%.
Keywords: machine learning, inference, neural network accelerator (NNA), low latency, kernel fusion, in-advance compilation
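The kernel fusion the abstract mentions means merging a producer op with a cheap element-wise successor so the pair executes as one accelerator kernel, saving a kernel launch and an intermediate buffer. The following is a minimal sketch of such a fusion pass over a toy graph IR; the Node class, the fuse_unary_elementwise() pass, and the UNARY_ELEMENTWISE rule are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of a node-merging / kernel-fusion pass, NOT the
# paper's implementation: the IR and the fusion rule are assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

# Cheap unary ops assumed safe to fold into their producer's kernel.
UNARY_ELEMENTWISE = {"relu", "sigmoid", "tanh"}


@dataclass
class Node:
    name: str
    op: str                                    # e.g. "conv", "relu"
    inputs: List[str] = field(default_factory=list)  # producer node names


def fuse_unary_elementwise(nodes: List[Node]) -> List[Node]:
    """Fold a single-consumer unary element-wise node into its producer so
    the pair is emitted as one fused kernel (one launch, no temp buffer)."""
    consumers: Dict[str, List[Node]] = {}
    for n in nodes:
        for src in n.inputs:
            consumers.setdefault(src, []).append(n)

    fused: List[Node] = []
    dropped = set()
    for n in nodes:
        if n.name in dropped:
            continue
        outs = consumers.get(n.name, [])
        if len(outs) == 1 and outs[0].op in UNARY_ELEMENTWISE:
            succ = outs[0]
            # The fused node keeps the successor's name, so edges that
            # pointed at the activation still resolve correctly.
            fused.append(Node(succ.name, f"{n.op}+{succ.op}", n.inputs))
            dropped.add(succ.name)
        else:
            fused.append(n)
    return fused


if __name__ == "__main__":
    graph = [
        Node("x", "input"),
        Node("c1", "conv", ["x"]),
        Node("r1", "relu", ["c1"]),
        Node("c2", "conv", ["r1"]),
    ]
    for n in fuse_unary_elementwise(graph):
        print(n.name, n.op, n.inputs)
    # -> x input []
    #    r1 conv+relu ['x']
    #    c2 conv ['r1']
```

In this toy example the conv/relu pair collapses into a single "conv+relu" node, cutting one node out of the graph, which is the kind of workload reduction the abstract attributes to its node-merging and kernel-fusion steps.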