Journal articles: 2 results found
1. An Ultra-Low-Loss Quantization Compression Method for Deep Neural Networks (cited 5 times)
Authors: 龚成, 卢冶, 代素蓉, 刘方鑫, 陈新伟, 李涛. 《软件学报》 (Journal of Software), EI, CSCD, PKU Core, 2021, No. 8, pp. 2391-2407 (17 pages).
Deep neural network (DNN) quantization is an efficient model compression method that represents the parameters and intermediate results of model computation with a small number of bits. The data bitwidth directly affects memory footprint, computational efficiency, and energy consumption. Previous quantization studies lacked effective quantitative analysis, which made quantization loss hard to predict. This paper proposes an ultra-low-loss DNN quantization method (ultra-low loss quantization, μL2Q) that reveals the intrinsic relationship between quantization bitwidth and quantization loss, thereby guiding bitwidth selection and reducing quantization loss. First, the original data are mapped to data following a standard normal distribution; then, the optimal quantization parameters are searched within equal-width quantization intervals; finally, the μL2Q method is integrated into the DNN training process and embedded into the mainstream machine learning frameworks Caffe and Keras to support the design and training of end-to-end model compression. Experimental results show that, under the same bitwidth, μL2Q achieves higher model accuracy than the latest methods, with accuracy improvements of 1.94%, 3.73%, and 8.24% on typical neural network models. Salient object detection experiments further show that μL2Q can handle complex computer vision tasks.
Keywords: neural network compression; neural network quantization; weight distribution; uniform quantization; optimal quantization loss
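The abstract outlines the μL2Q pipeline (normalize weights toward a standard normal distribution, then search equal-width quantization intervals for the parameters with the lowest loss) but does not give its exact formulation. The following NumPy sketch illustrates that idea under stated assumptions: it uses mean-squared error as the quantization-loss metric and a simple grid search over step sizes; the function name and search range are illustrative, not the paper's API.

```python
import numpy as np

def ul2q_style_quantize(w, bits=2, num_candidates=200):
    """Illustrative sketch: normalize weights to ~N(0, 1), then search
    equal-width quantization steps for the one minimizing MSE loss."""
    mu, sigma = w.mean(), w.std() + 1e-12
    x = (w - mu) / sigma                          # map to ~N(0, 1)
    levels = 2 ** bits
    best_step, best_err, best_q = None, np.inf, None
    # grid-search candidate interval widths (assumed range, not from the paper)
    for step in np.linspace(0.1, 3.0, num_candidates) * 2 / levels:
        idx = np.clip(np.round(x / step), -(levels // 2), levels // 2 - 1)
        q = idx * step
        err = np.mean((x - q) ** 2)               # quantization loss (MSE)
        if err < best_err:
            best_step, best_err, best_q = step, err, q
    return best_q * sigma + mu, best_step, best_err

w = np.random.randn(1024).astype(np.float32)
w_q, step, err = ul2q_style_quantize(w, bits=2)
print(f"step={step:.4f}  mse={err:.6f}")
```

In the paper the method is fused into training (Caffe and Keras), so such a quantizer would be applied to weights on each forward pass rather than once offline as in this standalone sketch.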
2. AutoQNN: An End-to-End Framework for Automatically Quantizing Neural Networks
Authors: 龚成, 卢冶, 代素蓉, 邓倩, 杜承昆, 李涛. Journal of Computer Science & Technology, SCIE, EI, CSCD, 2024, No. 2, pp. 401-420 (20 pages).
Exploring the expected quantizing scheme with a suitable mixed-precision policy is the key to compressing deep neural networks (DNNs) with high efficiency and accuracy. This exploration implies heavy workloads for domain experts, so an automatic compression method is needed. However, the huge search space of the automatic method introduces a heavy computing budget that makes the automatic process difficult to apply in real scenarios. In this paper, we propose an end-to-end framework named AutoQNN for automatically quantizing different layers with different schemes and bitwidths without any human labor. AutoQNN can efficiently find desirable quantizing schemes and mixed-precision policies for mainstream DNN models by combining three techniques: quantizing scheme search (QSS), quantizing precision learning (QPL), and quantized architecture generation (QAG). QSS introduces five quantizing schemes and defines three new schemes as a candidate set for scheme search, and then uses the differentiable neural architecture search (DNAS) algorithm to find the layer- or model-desired scheme from the set. QPL is, to the best of our knowledge, the first method to learn mixed-precision policies by reparameterizing the bitwidths of quantizing schemes. It efficiently optimizes both the classification loss and the precision loss of DNNs and obtains a relatively optimal mixed-precision model within a limited model size and memory footprint. QAG is designed to convert arbitrary architectures into corresponding quantized ones without manual intervention, to facilitate end-to-end neural network quantization. We have implemented AutoQNN and integrated it into Keras. Extensive experiments demonstrate that AutoQNN consistently outperforms state-of-the-art quantization methods. For 2-bit weights and activations of AlexNet and ResNet18, AutoQNN achieves accuracies of 59.75% and 68.86%, respectively, improving on state-of-the-art methods by up to 1.65% and 1.74%. In particular, compared with the full-precision AlexNet and ResNet18, the 2-bit models incur only slight accuracy degradations of 0.26% and 0.76%, respectively, which can fulfill practical application demands.
Keywords: automatic quantization; mixed precision; quantizing scheme search; quantizing precision learning; quantized architecture generation
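The abstract describes QSS and QPL only at a high level: the discrete choice of scheme and bitwidth is relaxed so that it can be optimized jointly with a classification loss and a precision (model-size) loss. The sketch below shows one common DNAS-style way to do this, using a softmax mixture over candidate bitwidths of a plain symmetric uniform quantizer; all function names, the candidate set, and the precision-loss proxy are assumptions for illustration, not the paper's actual formulation, and gradient updates of the architecture parameters are left to whatever framework hosts the model.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def uniform_quantize(x, bits):
    """Symmetric uniform quantizer used as a stand-in candidate scheme."""
    levels = 2 ** bits
    scale = (np.abs(x).max() + 1e-12) / (levels // 2)
    return np.clip(np.round(x / scale), -(levels // 2), levels // 2 - 1) * scale

def mixed_precision_forward(w, candidate_bits, alpha):
    """Relax the discrete bitwidth choice into a softmax-weighted mixture,
    so the architecture parameters `alpha` can be optimized jointly with
    the task loss plus a precision (model-size) penalty."""
    probs = softmax(alpha)
    w_mix = sum(p * uniform_quantize(w, b) for p, b in zip(probs, candidate_bits))
    expected_bits = float(np.dot(probs, candidate_bits))
    precision_loss = expected_bits * w.size      # proxy for model size in bits
    return w_mix, precision_loss

w = np.random.randn(256, 256).astype(np.float32)
alpha = np.zeros(3)                              # learnable, one per candidate
w_q, size_bits = mixed_precision_forward(w, [2, 4, 8], alpha)
print(f"expected model size: {size_bits / 8 / 1024:.1f} KiB")
```

At the end of the search, the mixture would typically be collapsed to the highest-probability bitwidth per layer, yielding the final mixed-precision policy.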