
Effective Model Compression via Stage-wise Pruning

Abstract: Automated machine learning (AutoML) pruning methods aim to search for a pruning strategy automatically in order to reduce the computational complexity of deep convolutional neural networks (deep CNNs). However, previous work has found that the results of many AutoML pruning methods cannot even surpass those of uniform pruning. This paper shows that the ineffectiveness of AutoML pruning is caused by insufficient and unfair training of the supernet. A deep supernet suffers from insufficient training because it contains too many candidate subnets. To overcome this, a stage-wise pruning (SWP) method is proposed, which splits a deep supernet into several stage-wise supernets to reduce the number of candidates and uses in-place distillation to supervise the training of each stage. In addition, a wide supernet suffers from unfair training because the sampling probability of each channel is unequal. Therefore, the fullnet and the tinynet are sampled in every training iteration to ensure that each channel is trained sufficiently. Remarkably, the proxy performance of subnets trained with SWP is closer to their actual performance than in most previous AutoML pruning work. Furthermore, experiments show that SWP achieves state-of-the-art results on both CIFAR-10 and ImageNet under the mobile setting.
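The abstract describes a fair-sampling training step in which the widest network (fullnet) and the narrowest network (tinynet) are both trained every iteration, with the fullnet's soft predictions supervising the subnets via in-place distillation. Below is a minimal, hypothetical PyTorch sketch of such a step; `set_width`, `min_width`, `sample_random_width`, and the loop structure are illustrative placeholders and not the authors' actual implementation or API.

```python
import torch
import torch.nn.functional as F

def swp_training_step(supernet, optimizer, images, labels, num_random_subnets=2):
    """One fair-sampling training iteration with in-place distillation (sketch)."""
    optimizer.zero_grad()

    # 1) Fullnet: the widest candidate is trained with the ground-truth labels.
    supernet.set_width(1.0)                          # hypothetical width-switching API
    full_logits = supernet(images)
    loss = F.cross_entropy(full_logits, labels)
    loss.backward()

    # Soft targets for in-place distillation; no gradient flows through the teacher.
    soft_targets = F.softmax(full_logits.detach(), dim=1)

    # 2) Tinynet plus a few random subnets: supervised by the fullnet's soft outputs,
    #    so every channel is sampled and updated in every iteration.
    widths = [supernet.min_width] + [supernet.sample_random_width()
                                     for _ in range(num_random_subnets)]
    for w in widths:
        supernet.set_width(w)
        sub_logits = supernet(images)
        distill_loss = F.kl_div(F.log_softmax(sub_logits, dim=1),
                                soft_targets, reduction="batchmean")
        distill_loss.backward()                      # gradients accumulate before one step

    optimizer.step()
    return loss.item()
```

In this sketch the gradients of the fullnet loss and of all distillation losses accumulate in the shared weights before a single optimizer step, which is one common way to realize the "fullnet and tinynet sampled in each iteration" scheme mentioned in the abstract.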
Source: Machine Intelligence Research (EI, CSCD), 2023, No. 6, pp. 937-951 (15 pages).
Funding: This work was supported by the Natural Science Foundation of Zhejiang Province, China (No. LY21F030018) and the National Key R&D Program of China (No. 2018YFB1308400).