Journal Articles — 1 result found
Parameters Compressing in Deep Learning (cited 8 times)
Authors: Shiming He, Zhuozhou Li, Yangning Tang, Zhuofan Liao, Feng Li, Se-Jung Lim. Computers, Materials & Continua (SCIE, EI), 2020, Issue 1, pp. 321-336 (16 pages).
With the popularity of deep learning tools in image decomposition and natural language processing, how to store and manage the large number of parameters required by deep learning algorithms has become an urgent problem. These parameter sets are huge, often numbering in the millions. A feasible direction at present is to use sparse representation techniques, namely matrix decomposition and tensor decomposition, to compress the parameter matrix and thereby reduce both the parameter count and the storage pressure. To let vectors benefit from the compression performance of matrix decomposition and tensor decomposition, we use reshaping and unfolding so that vectors serve as the input and output of Tensor-Factorized Neural Networks. We analyze how reshaping achieves the best compression ratio. From the relationship between the shape of a tensor and its number of parameters, we derive a lower bound on the number of parameters, and we verify this lower bound on several datasets.
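The compression idea in the abstract can be made concrete with a minimal sketch. The code below is our own illustration, not code from the paper: it applies the simplest matrix decomposition, a truncated rank-r SVD, to a hypothetical 1024x1024 weight matrix; the matrix size and the rank r = 64 are assumptions chosen to make the arithmetic visible. Storing the two factors costs 2 x 1024 x 64 = 131,072 values instead of 1024^2 = 1,048,576, an 8x reduction.

```python
import numpy as np

# Minimal sketch of parameter compression via low-rank matrix decomposition.
# The 1024x1024 shape and rank r = 64 are illustrative assumptions,
# not values taken from the paper.
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))  # stand-in for a dense weight matrix

r = 64
U, s, Vt = np.linalg.svd(W, full_matrices=False)
U_r = U[:, :r] * s[:r]   # scale each kept column by its singular value
V_r = Vt[:r, :]
W_approx = U_r @ V_r     # rank-r reconstruction of W

orig_params = W.size                  # 1,048,576
comp_params = U_r.size + V_r.size     # 131,072
print(f"compression ratio: {orig_params / comp_params:.1f}x")  # 8.0x
print(f"relative error: "
      f"{np.linalg.norm(W - W_approx) / np.linalg.norm(W):.3f}")
```

Note that a random Gaussian matrix has a flat spectrum, so the reconstruction error here is large; trained weight matrices typically have faster-decaying singular values and compress far better. Tensor decomposition generalizes this scheme by first reshaping the weights into a higher-order tensor and factorizing that, which is where the abstract's relationship between tensor shape and parameter count (and hence its lower bound) comes into play.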
Keywords: Deep neural network, parameters compressing, matrix decomposition, tensor decomposition