Abstract: A relevance feedback method based on semi-supervised linear neighborhood propagation, FSLNP (feedback semi-supervised linear neighborhood propagation), is proposed. The algorithm preserves both the positive/negative example constraint information and the local and global relevance structure of the graph. Using the labeled and unlabeled image points obtained from relevance feedback, it finds a graph structure that represents image relevance well, revealing the semantic structural relationships among image points. Experimental results show that the algorithm improves retrieval accuracy, and that after long-term learning an optimized relevance graph structure can be obtained.
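The propagation step underlying this family of methods can be sketched as follows. This is a minimal, generic linear neighborhood propagation (LNP) example, not the FSLNP algorithm itself: each point is reconstructed from its nearest neighbors (as in LLE), the reconstruction weights define a row-stochastic graph, and labels spread iteratively from labeled points; the nonnegativity clipping and parameter values are illustrative assumptions.

```python
import numpy as np

def lnp_propagate(X, y, n_neighbors=5, alpha=0.99, n_iter=200):
    """Generic linear neighborhood propagation sketch.

    X: (n, d) data points; y: (n,) labels, -1 for unlabeled.
    Returns a predicted label for every point.
    """
    n = len(X)
    W = np.zeros((n, n))
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:n_neighbors + 1]   # skip the point itself
        Z = X[nbrs] - X[i]                        # neighbors in local coordinates
        G = Z @ Z.T + 1e-6 * np.eye(len(nbrs))    # regularized local Gram matrix
        w = np.linalg.solve(G, np.ones(len(nbrs)))
        w = np.clip(w, 0.0, None)                 # LNP constrains weights to be nonnegative
        W[i, nbrs] = w / w.sum()                  # reconstruction weights sum to one
    classes = np.unique(y[y >= 0])
    F = np.zeros((n, len(classes)))
    Y0 = np.zeros_like(F)
    for k, c in enumerate(classes):
        Y0[y == c, k] = 1.0
    for _ in range(n_iter):                       # F <- alpha * W F + (1 - alpha) * Y0
        F = alpha * W @ F + (1 - alpha) * Y0
    return classes[F.argmax(axis=1)]
```

FSLNP additionally folds the positive/negative feedback constraints into the graph; the sketch above only shows the unconstrained propagation that the feedback step builds on.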
Abstract: This paper introduces prior discriminative information into the dimensionality reduction process and, combining it with the linear neighborhood propagation model, proposes a semi-supervised incremental linear neighborhood propagation algorithm, S-ILNP (Semi-supervised Incremental Linear Neighborhoods Propagation). The method first uses prior label information to construct between-class and within-class graphs, then performs dimensionality reduction based on the Laplacian eigenmap principle, and applies linear neighborhood propagation for semi-supervised learning: under the global consistency assumption, label information is propagated globally from the labeled data points through local nearest neighbors. By fully exploiting the prior discriminative information, the algorithm significantly improves image retrieval accuracy.
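The Laplacian eigenmap step mentioned above can be sketched independently of the class-graph construction. The example below is a minimal sketch, assuming a symmetric adjacency matrix `W` has already been built (e.g. from within-class connections): the low-dimensional coordinates are the eigenvectors of the normalized graph Laplacian with the smallest nontrivial eigenvalues.

```python
import numpy as np

def laplacian_eigenmaps(W, dim=2):
    """Embed graph nodes given a symmetric adjacency matrix W (n, n).

    Sketch of the Laplacian eigenmap step: the eigenvectors of the
    normalized Laplacian for the smallest nonzero eigenvalues give
    the low-dimensional coordinates.
    """
    d = W.sum(axis=1)
    D_isqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - D_isqrt @ W @ D_isqrt   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L_sym)               # eigenvalues in ascending order
    # skip the trivial constant eigenvector (eigenvalue ~ 0)
    return D_isqrt @ vecs[:, 1:dim + 1]
```

In the supervised variant, `W` would carry large weights on within-class edges and small (or penalized) weights on between-class edges, so that the embedding pulls same-class points together.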
Fund: supported by the National Key R&D Program of China (No. 2021YFF0602104-2).
Abstract: In recent years, Deep Learning (DL) techniques have been widely used for edge computing in the Internet of Things (IoT) and the Industrial Internet of Things (IIoT), achieving good performance. However, a growing body of work has demonstrated the vulnerability of neural networks, so it is important to test their robustness. Inspired by layer-wise relevance propagation and neural network verification, we propose a novel measurement of sensitive neurons and important neurons, together with a novel neuron coverage criterion for robustness testing. Based on this criterion, we design a test sample generation method, named DeepSI, built on the definitions of sensitive and important neurons. Furthermore, we construct sensitive-decision paths through the neural network by selecting sensitive and important neurons. Finally, we validate our approach in several experiments, and the results show that the proposed method achieves superior performance.
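The layer-wise relevance propagation (LRP) technique that inspires the neuron ranking can be sketched for a small ReLU MLP. This is a generic LRP epsilon-rule example, not the DeepSI criterion itself: relevance starts at the winning output neuron and is redistributed backward in proportion to each neuron's contribution, and the per-layer relevance scores could then be used to rank neurons by importance.

```python
import numpy as np

def lrp_epsilon(weights, biases, x, eps=1e-6):
    """LRP epsilon rule for a small ReLU MLP (sketch).

    weights: list of (in, out) matrices; biases: matching vectors.
    Returns a list of per-layer relevance vectors, input layer first.
    """
    # forward pass, storing activations
    acts = [x]
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = acts[-1] @ W + b
        if i < len(weights) - 1:
            z = np.maximum(z, 0.0)        # ReLU on hidden layers
        acts.append(z)
    # relevance starts at the winning output neuron
    R = np.zeros_like(acts[-1])
    k = acts[-1].argmax()
    R[k] = acts[-1][k]
    relevances = [R]
    # backward pass: redistribute relevance proportionally to contributions
    for W, b, a in zip(reversed(weights), reversed(biases), reversed(acts[:-1])):
        z = a @ W + b
        z = z + eps * np.sign(z)          # epsilon stabilizer avoids division by ~0
        s = R / z
        R = a * (s @ W.T)
        relevances.append(R)
    return relevances[::-1]               # input-layer relevance first
```

With zero biases the rule approximately conserves total relevance across layers, which is what makes the per-neuron scores comparable when selecting sensitive-decision paths.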
Abstract: In differential privacy based privacy-preserving deep learning, the length of the training period and the way the privacy budget is allocated directly constrain the utility of the model. To address the limited training periods and unreasonable budget allocation of existing approaches, which lead to poor security and usability, a deep learning method based on data feature relevance and adaptive differential privacy (deep learning method based on data feature Relevance and Adaptive Differential Privacy, RADP) is proposed. First, the method uses the layer-wise relevance propagation algorithm on a pre-trained model to compute the average relevance of each feature of the original dataset. Then, an information entropy based method computes a privacy measure for each feature's average relevance, and Laplace noise is adaptively added to the average relevance according to this measure. On this basis, the privacy budget is allocated according to the noise-protected average relevance of each feature, and Laplace noise is adaptively added to the features. Finally, theoretical analysis shows that RADP satisfies ε-differential privacy while balancing security and usability. Experiments on three real datasets (MNIST, Fashion-MNIST, CIFAR-10) show that RADP outperforms AdLM (Adaptive Laplace Mechanism), DPSGD (Differential Privacy with Stochastic Gradient Descent), and DPDLIGDO (Differentially Private Deep Learning with Iterative Gradient Descent Optimization) in both accuracy and average loss, while maintaining good stability.
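The core "relevance-proportional budget" idea can be sketched with the standard Laplace mechanism. This is a minimal illustration under simplifying assumptions (nonnegative relevance scores, a known sensitivity, and an illustrative function name), not the paper's actual RADP procedure: a total budget ε is split across features in proportion to their relevance, so more relevant features receive a larger share of ε and therefore less noise.

```python
import numpy as np

def adaptive_laplace(features, relevance, total_eps=1.0, sensitivity=1.0):
    """Split a total privacy budget across features by relevance (sketch).

    features:  (n,) feature values to perturb.
    relevance: (n,) nonnegative per-feature relevance scores.
    Returns the noisy features and the per-feature budgets.
    """
    rng = np.random.default_rng(0)
    p = relevance / relevance.sum()       # relevance-proportional shares
    eps_i = total_eps * p                 # per-feature budgets sum to total_eps
    scale = sensitivity / eps_i           # Laplace scale b = sensitivity / eps_i
    noisy = features + rng.laplace(0.0, scale)
    return noisy, eps_i
```

By sequential composition the per-feature budgets sum to the total ε, so the overall release still satisfies ε-differential privacy; the relevance-proportional split simply concentrates the noise on the features that matter least.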