Journal Articles
1 article found
A revisit to MacKay algorithm and its application to deep network compression (Cited by 1)
Authors: Chune LI, Yongyi MAO, Richong ZHANG, Jinpeng HUAI. Frontiers of Computer Science (SCIE, EI, CSCD), 2020, No. 4, pp. 39–54 (16 pages)
An iterative procedure introduced in MacKay’s evidence framework is often used for estimating the hyperparameter in empirical Bayes. Together with a particular form of prior, the estimation of the hyperparameter reduces to an automatic relevance determination (ARD) model, which provides a soft way of pruning model parameters. Despite the effectiveness of this estimation procedure, it has to date remained primarily a heuristic, and its application to deep neural networks has not yet been explored. This paper formally investigates the mathematical nature of this procedure and justifies it as a well-principled algorithmic framework, which we call the MacKay algorithm. As an application, we demonstrate its use in deep neural networks, which typically have complicated structures with millions of parameters and can be pruned to reduce memory requirements and boost computational efficiency. In experiments, we adopt the MacKay algorithm to prune the parameters of simple networks such as LeNet, deep convolutional VGG-like networks, and residual networks for large-scale image classification tasks. Experimental results show that the algorithm can compress neural networks to a high level of sparsity with little loss of prediction accuracy, comparable to the state of the art.
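The iterative hyperparameter re-estimation the abstract refers to is, in its classical form, MacKay's fixed-point update for an ARD prior in Bayesian linear regression: each weight gets its own prior precision alpha_i, which is repeatedly re-estimated from the posterior; weights whose alpha diverges are softly pruned. A minimal sketch of that classical procedure (not the paper's deep-network variant; the function name, the precision cap, and the pruning threshold are illustrative choices):

```python
import numpy as np

def mackay_ard(Phi, y, n_iter=100, alpha_init=1.0, beta_init=1.0,
               alpha_cap=1e8, prune_threshold=1e6):
    """MacKay-style fixed-point ARD updates for Bayesian linear regression.

    Model: y = Phi @ w + noise with noise precision beta,
    prior w_i ~ N(0, 1/alpha_i) with one precision alpha_i per weight.
    """
    N, D = Phi.shape
    alpha = np.full(D, alpha_init)  # per-parameter precisions (ARD prior)
    beta = beta_init                # noise precision
    for _ in range(n_iter):
        # Posterior over weights given the current hyperparameters
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
        m = beta * Sigma @ Phi.T @ y
        # gamma_i: MacKay's "number of well-determined parameters" per weight
        gamma = np.clip(1.0 - alpha * np.diag(Sigma), 0.0, 1.0)
        # Fixed-point re-estimation of alpha and beta (capped for stability)
        alpha = np.minimum(gamma / np.maximum(m**2, 1e-12), alpha_cap)
        resid = y - Phi @ m
        beta = (N - gamma.sum()) / np.maximum(resid @ resid, 1e-12)
    pruned = alpha > prune_threshold  # a large alpha_i drives w_i to zero
    return m, alpha, pruned
```

On synthetic data where only some features carry signal, the precisions of the irrelevant weights grow by orders of magnitude while the relevant ones stay small, which is the "soft pruning" behavior the abstract describes.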
Keywords: deep learning, MacKay algorithm, model compression, neural network