
Parameters Compressing in Deep Learning (Cited by: 9)

Abstract: With the popularity of deep learning tools in image decomposition and natural language processing, how to support and store the large number of parameters required by deep learning algorithms has become an urgent problem. These parameters can number in the millions. A feasible direction at present is to use sparse representation techniques to compress the parameter matrix, reducing both the parameter count and the storage pressure. Such methods include matrix decomposition and tensor decomposition. To let vectors take advantage of the compression performance of matrix and tensor decomposition, we use reshaping and unfolding so that vectors can serve as the input and output of Tensor-Factorized Neural Networks. We analyze how reshaping achieves the best compression ratio. From the relationship between the shape of a tensor and its number of parameters, we derive a lower bound on the number of parameters, and we verify this lower bound on several data sets.
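To make the compression idea in the abstract concrete, here is a minimal sketch (not from the paper itself) of the matrix-decomposition baseline it builds on: a dense weight matrix W of shape m×n costs m·n parameters, while a rank-r factorization W ≈ U·V costs only r·(m+n). The variable names and the use of truncated SVD are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def low_rank_compress(W, rank):
    """Truncated SVD: approximate W (m x n) as U_r @ V_r with
    U_r (m x rank) and V_r (rank x n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # fold singular values into U
    V_r = Vt[:rank, :]
    return U_r, V_r

m, n, r = 64, 128, 8
rng = np.random.default_rng(0)
# Build a matrix that is exactly rank r, so rank-r truncation is lossless.
W = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

U_r, V_r = low_rank_compress(W, r)
original_params = m * n            # 8192 parameters for the dense matrix
compressed_params = r * (m + n)    # 1536 parameters for the two factors
```

This also hints at why reshaping matters for the compression ratio: for a fixed total size N = m·n, the factored cost r·(m+n) is smallest when m and n are as close to √N as possible, which is consistent with the abstract's analysis of how tensor shape relates to parameter count.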
Source: Computers, Materials & Continua (SCIE, EI), 2020, No. 1, pp. 321-336 (16 pages)
Funding: This work was supported by the National Natural Science Foundation of China (Nos. 61802030, 61572184), the Science and Technology Projects of Hunan Province (No. 2016JC2075), and the International Cooperative Project for "Double First-Class", CSUST (No. 2018IC24).
Co-cited references: 22

Citing documents: 9

Secondary citing documents: 16
