Journal Articles
1 article found
Deep Model Compression for Mobile Platforms: A Survey (Cited by: 6)
Authors: Kaiming Nan, Sicong Liu, +1 more author, Junzhao Du, Hui Liu. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2019, Issue 6, pp. 677-693 (17 pages)
Abstract: Despite the rapid development of mobile and embedded hardware, directly executing computation-expensive and storage-intensive deep learning algorithms on these devices remains constrained for sensory data analysis. In this paper, we first summarize the layer compression techniques for state-of-the-art deep learning models in three categories: weight factorization and pruning, convolution decomposition, and special layer architecture design. For each category of layer compression techniques, we quantify the storage and computation savings they make tunable and discuss their practical challenges and possible improvements. Then, we implement Android projects using TensorFlow Mobile to test these 10 compression methods and compare their practical performance in terms of accuracy, parameter size, intermediate feature size, computation, processing latency, and energy consumption. To further discuss their advantages and bottlenecks, we test their performance on four standard recognition tasks across six resource-constrained Android smartphones. Finally, we survey two types of run-time Neural Network (NN) compression techniques that are orthogonal to the layer compression techniques: run-time resource management and cost optimization with special NN architectures.
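To make the first category concrete, here is a minimal sketch (not from the paper) of magnitude-based weight pruning, the simplest form of the weight-pruning techniques the abstract surveys. The function name and the 50% sparsity target are illustrative assumptions, and a real deployment would prune per layer and fine-tune afterward:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the given fraction of weights with the smallest magnitudes.

    weights  : NumPy array of layer weights
    sparsity : fraction in [0, 1) of entries to set to zero
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Toy 2x2 weight matrix: pruning at 50% sparsity removes the
# two smallest-magnitude entries (0.1 and 0.05).
w = np.array([[0.1, -0.8], [0.05, 1.2]])
pruned = magnitude_prune(w, 0.5)
```

Only the mask computation is shown; the storage saving the survey measures comes from storing the surviving weights in a sparse format rather than the dense matrix.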
Keywords: deep learning; model compression; run-time resource management; cost optimization